Ethics Exam PDF
University of Groningen
Summary
This document appears to be lecture notes on ethics, particularly related to research and engineering. It discusses topics like the is-ought problem, research ethics committees (RECs/MRECs/MERCs/METCs), and different types of research misconduct. Examples from medical science and biomedical engineering are included, along with discussion questions related to the subject matter.
Full Transcript
Ethics EXAM

1. Lecture slides

Lecture 1

Ethics is not the same as tradition, social rules, religion, law, or etiquette, but it can relate to each of these in different ways. The term 'morality' is used in different ways both in scholarly literature and beyond (particular moralities vs. universal morality).

What is Ethics?
Theoretical considerations, using logic and careful reasoning, about what is right or good, and what is not, and why. Giving and considering reasons when justifying our action, or refraining from action; a systematic intellectual/philosophical discipline. Ethics includes being open to arguments and to changing our position if we are presented with good enough reasons. It means reflecting on the precepts of a particular morality, assessing them critically, and only then trying to determine the best position and the right course of action.

Is-ought problem
https://www.youtube.com/watch?v=eT7yXG2aJdY
An important idea in Western ethics is the is/ought problem (David Hume, Immanuel Kant): "'Ought' cannot be derived from 'is'!" Or: normative claims cannot be derived from factual claims (at least not directly or exclusively). To claim otherwise is to commit the naturalistic fallacy: mistakenly assuming that what is 'natural' or 'factual' is automatically 'good' or 'right'.

Reflect and discuss (in pairs)
Questions about the relation between facts and values:
1. Think of examples: deriving normative statements from descriptive ones, or (mis)representing normative statements as descriptive ones. Do you know of any other examples (than meat-eating)? *Note: it might be something you agree with, but it is at least subject to critical reflection in light of the is/ought problem.
2. In what way are empirical facts relevant for moral judgements or 'shoulds', and therefore for ethical reasoning?
3. What does this mean for the relationship of science and ethics? *Note: think of medical science and biomedical engineering contexts for examples.

Three Domains of Ethics in Science and Engineering
Lex Bouter (VU):
- Research Ethics (RE), narrowly conceived, concerns the ethical considerations of research with humans and animals.
- Research/Academic Integrity (RI), also called 'Good Scientific Practice' (GSP) in Europe, concerns the kinds of behaviour of researchers that either hamper or establish the validity (truth, reliability) of research, and/or trust in science and between scientists. The 'three big sins' against research integrity: Fabrication, Falsification, Plagiarism (FFP).
- Responsible Research and Innovation (RRI) concerns the benefits and harms of research for society and the environment (a term used mostly in the EU).

All three (RE, RI, and RRI) fall under the broader field of ethics in science and engineering. (Caution: the reading "Research Ethics: An Overview" calls this broader field 'Responsible Conduct in Research'; this is because the US discourse is a bit different from the one that became customary in Europe.)

Research Ethics (involving human and/or animal subjects) is regulated by law in most developed and many developing countries. Research/Academic Integrity is much less regulated, and most often indirectly (mostly within research institutions, which act only within their own powers). Why do you think that is?

RE, RI, and RRI in the ethics of BME: RRI has special relevance for engineering.
It is increasingly regulated, but still only in some aspects and often indirectly (situations where the likely unintended harm can be clearly assessed, or where political pressure is high, etc.). Biomedical engineering often brings together all three areas of ethics in research: RE, RI, and RRI.

History of Research Ethics Codes
The historicity of the codes of ethics: mostly, they came to be as responses to things going terribly wrong. They provide some solutions to the potential conflicts between the scientific perspective/'objectified gaze' of science on sentient beings on the one hand, and the principles of human dignity and rights, as well as animal welfare, on the other. Today, in all countries that have them, research ethics codes are based on, but also exceed, the basic legal framework of human rights. In the EU, an important new legal framework for which the codes of research ethics recently had to be updated is the GDPR (General Data Protection Regulation).

History of Research Ethics (RE)
- Nuremberg Code (1947, international): application of basic human rights in research involving human subjects.
- Declaration of Helsinki (1964-…2024, international): establishes the importance of ethical review committees (among several other things).
- Henry Beecher (1966, US): further development of a morally satisfactory notion of informed consent.
- Belmont Report (1979, US): three fundamental ethical principles (respect for persons, beneficence, justice) that should underlie the conduct of human subjects research and should be reflected in regulatory requirements.

The US Framework: 'Responsible Conduct of Research' (RCR)
The US governmental Office of Research Integrity's (ORI) nine core areas of Responsible Conduct of Research ("Research Ethics: Overview", p. 586):
1. Data acquisition, management, sharing, and ownership
2. Conflict of interest and commitments
3. Human subjects
4. Animal welfare
5. Research misconduct
6. Publication practices and responsible authorship
7. Mentor/trainee responsibilities
8. Peer review
9. Collaborative science

Research Integrity
RI is about the personal integrity of researchers, also in research where no humans or animals are directly involved. It is about the values that should guide any scientific practice in order to uphold science's integrity. Despite the encouraging development and implementation of research ethics codes and committees, scientific research and engineering remain a 'human enterprise'. This means that research remains vulnerable not only to human error, but also to fraud and other kinds of unethical behaviour.

An awareness of the discrepancy between an ideal of 'pure science' on the one hand and the imperfect reality of scientific practice (plagued and influenced by various selfish interests and questionable practices) on the other should not, however, lead to cynicism. There are principles, mechanisms and institutions in place which try to prevent, discover and remove fraudulent practices in science. They have resulted from lessons learned (sometimes painfully) through the centuries of development of science as an embedded social practice.

The greatest sins against scientific integrity (research misconduct): Fabrication, Falsification, Plagiarism, or 'FFP'.

Plagiarism
The basic idea is simple: plagiarism is the unethical practice of ascribing authorship of other people's ideas, arguments or text to oneself, i.e. falsely presenting something which is not your own as your own.
The injunction against plagiarism is an application of two general moral rules to scientific research and academic writing (where it becomes a meticulous, exact affair):
- Do not claim something as yours/of your own creation if it is not
- Give credit where credit is due

Basic categories of plagiarism (source: RUG Language Centre):
- Literally copying the work (or parts thereof) of others without indicating that it concerns someone else's words and/or mentioning the exact place where the text was found
- Improper paraphrasing: paraphrasing the work (or parts thereof) of others without indicating that it concerns someone else's ideas and mentioning the exact place where the ideas were found
- Copying ideas from someone else's work without indicating that they are someone else's ideas (presenting the ideas of others as your own)
- Self-plagiarism (reusing your own previously submitted or published work while not stating/recognizing that)

FFP
Fabrication and falsification mainly concern the handling of raw data in empirical research (measurements, in both lab and field work). Three main and most general rules (a sketch of how they can be checked follows below):
- Raw data should never be changed or modified (data falsification)
- No parts of it should be removed (data falsification)
- No new data can be added to it (data fabrication)
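A minimal illustrative sketch (not part of the lecture material; the directory layout, file pattern, and names are hypothetical) of how a research group might make violations of these three rules detectable: record a cryptographic checksum of every raw data file at acquisition time, then re-verify before analysis or publication.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Checksum every raw data file once, at acquisition time."""
    return {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}

def verify_manifest(data_dir: Path, manifest: dict) -> list:
    """Flag files changed, removed, or added since the manifest was made."""
    current = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    problems = []
    for name, digest in manifest.items():
        if name not in current:
            problems.append(f"removed: {name}")   # rule 2: data falsification
        elif current[name] != digest:
            problems.append(f"modified: {name}")  # rule 1: data falsification
    for name in current.keys() - manifest.keys():
        problems.append(f"added: {name}")         # rule 3: data fabrication
    return problems

# Hypothetical usage:
# manifest = build_manifest(Path("raw_data"))   # right after data acquisition
# ...analysis happens...
# assert verify_manifest(Path("raw_data"), manifest) == []
```

A checksum manifest does not prevent misconduct, but it makes any later change to the raw data visible, which is the practical point of the three rules above.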
Various regulations and directives in the EU

Medical devices: MDR (Medical Device Regulation)
This Regulation aims to ensure the smooth functioning of the internal market as regards medical devices, taking as a base a high level of protection of health for patients and users, and taking into account the small and medium-sized enterprises that are active in this sector. At the same time, this Regulation sets high standards of quality and safety for medical devices in order to meet common safety concerns as regards such products. Both objectives are pursued simultaneously and are inseparably linked, with neither being secondary to the other.

In Vitro Diagnostic Medical Device Regulation (IVDR)
This Regulation harmonises the rules in the EU for placing on the market and putting into service in vitro diagnostic medical devices (IVDs) and their accessories. It sets high standards of quality and safety for IVDs.

Medicinal products: Regulation 726/2004 (lays down the procedures)
The purpose of this Regulation is to lay down Community procedures for the authorisation, supervision and pharmacovigilance of medicinal products for human and veterinary use, and to establish a European Medicines Agency.

Definition according to the MDR
Medical device: means any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes:
- diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease,
- diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability,
- investigation, replacement or modification of the anatomy or of a physiological or pathological process or state,
- providing information by means of in vitro examination of specimens derived from the human body, including organ, blood and tissue donations,
and which does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body, but which may be assisted in its function by such means.

Medical device classes (classification criteria): duration of use, invasiveness, natural opening, surgical, implants, active devices, custom made, in-house manufactured, software.

Lecture 2
› REC: Research Ethics Committee
› MREC: Medical Research Ethics Committee
› MERC: Medical Ethics Review Committee
› METC: Medisch Ethische Toetsingscommissie
› IRB: Institutional Review Board (US/UK)

Clinical and research ethics: history
- Nuremberg Code (1947): voluntary participation, informed consent, withdrawal at any time
- Declaration of Helsinki (1964): the interests of the human subject are most important
- MREC (1975): review by an ethics committee
› Why ▪ Safety of human subjects ▪ Legal protection for researchers ▪ Quality assurance, added value to science
› What → responsibility and accountability
› How → rules and regulations

Assessment of research protocols → WMO
1. It concerns medical scientific research, and;
2. Participants are subject to procedures or are required to follow rules of behaviour
Examples of WMO research: pharmaceutical research, medical device research, research in behavioural sciences or psychology

Assessment of research protocols → non-WMO
1. It concerns medical scientific research that is not invasive (i.e. participants are not subject to procedures or rules of behaviour)
2. It is initiated and/or financed by the pharmaceutical industry
Examples of non-WMO research: retrospective research (no intervention or burden for the subjects); observational, non-invasive research (e.g., human movement sciences); evaluation of care (questionnaire on the implementation of a treatment); initiation of a bio- or databank

WMO or non-WMO? (A sketch of the decision rule follows below.)
- A new drug is tested in a randomised study population. 50% receives the new drug, 50% is given a placebo.
- Researchers are testing knee prosthetics on healthy participants. Subjects are asked to fill in a diary daily about their state of wellbeing. The trial runs for 5 years. Data is stored in a databank.
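Purely as an illustration of the two criteria above (not an official decision aid; the CCMO provides the authoritative tools), the WMO test can be read as a conjunction of both conditions. A minimal Python sketch:

```python
def wmo_governed(medical_scientific_research: bool,
                 procedures_or_imposed_rules_of_behaviour: bool) -> bool:
    """Sketch of the two-part test from the slides: a study is governed by
    the WMO only when BOTH conditions hold. Real cases are decided with the
    CCMO's own tools, not with this function."""
    return medical_scientific_research and procedures_or_imposed_rules_of_behaviour

# The two quiz cases above, on a plain reading of the criteria:
print(wmo_governed(True, True))    # randomised placebo-controlled drug trial -> True
print(wmo_governed(True, True))    # prosthetics trial with a daily diary (imposed
                                   # rules of behaviour) -> arguably True
print(wmo_governed(True, False))   # e.g. retrospective, non-invasive research -> False
```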
Ethics in science and engineering: safety
Protection of human subjects
› Physical/emotional safety
▪ Physical risks and burden (e.g. experimental treatment)
▪ Emotional/mental risks and burden (e.g. invasive questionnaires)
› Data privacy
▪ International, transdisciplinary point of attention
▪ GDPR
▪ Applies to both WMO and non-WMO research

Data privacy: GDPR (personal data)
› Personal data: "any information that relates to an identified or identifiable living individual"
› Special categories of personal data: genetics, religion, health data
› Processing special categories of personal data is prohibited (art. 9 §2)
▪ Unless: informed consent
▪ Exceptions apply (e.g. public health interest, monitoring)

(De-)identification of data (a sketch follows below):
› Identifiable data: personal data (incl. special categories)
› Pseudonymised data
▪ Can be related back to an individual subject with additional information
▪ E.g. use of a code list
› Anonymous data: cannot be related back to an individual subject
▪ Not based on personal data
▪ Not based on individual characteristics
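A minimal illustrative sketch (not part of the lecture material; the records and field names are hypothetical) of the distinction above: pseudonymisation replaces direct identifiers with codes, but as long as the separately stored code list exists, the data can be related back to individuals and so remain personal data under the GDPR.

```python
import secrets

# Hypothetical records containing direct identifiers and special-category
# (health) data in the sense of GDPR art. 9.
records = [
    {"name": "A. Jansen",   "diagnosis": "osteoarthritis"},
    {"name": "B. de Vries", "diagnosis": "tendinopathy"},
]

code_list = {}  # pseudonym -> identity; must be stored separately, access-controlled

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a random code. The result is still
    personal data under the GDPR, because the code list can relate it back
    to an individual subject."""
    code = f"SUBJ-{secrets.token_hex(4)}"
    code_list[code] = record["name"]
    return {"subject": code, "diagnosis": record["diagnosis"]}

pseudonymised = [pseudonymise(r) for r in records]
print(pseudonymised)

# Anonymisation, by contrast, would require destroying the code list AND
# ensuring that no combination of the remaining fields can single out an
# individual; only then do the data fall outside the GDPR's scope.
```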
Human tissue
Legal framework: Wet Zeggenschap Lichaamsmateriaal (WZL) (the Dutch Human Tissue (Control) Act)
› Collection (in life) is only possible with informed consent
▪ Exceptions apply
› Informed consent can be withdrawn at any time
▪ Collected tissue will then be destroyed

Case study: stem cell research
› General
▪ Collection (in life) is only possible with informed consent
▪ Informed consent can be withdrawn at any time; the tissue will be destroyed
› Stem cell research
▪ Applies to the performance of acts with the processed live cellular material
▪ An immortal cell line does not have to be destroyed
What are the ethical considerations of the legal framework (WZL) with regard to stem cell research?

Regulations
- Nuremberg Code
▪ Voluntary participation
▪ Withdrawal at any time
- Declaration of Helsinki
▪ The interests of the human subject exceed the study objectives
▪ Review by an ethics committee

Ethical considerations of the legal framework (WZL) for stem cell research
› Informed consent
▪ Is it possible to ask informed consent? (e.g. when the information is hard to understand)
▪ How broad or narrow is this consent?
› Immortal cell lines
▪ What will happen with this immortal cell line? (e.g. what kind of research)
▪ Informed consent to what exactly? What is the future research?
▪ What are the possibilities of stem cell research in the future?
If immortal stem cell lines are not destroyed, to what does a human subject give informed consent? Is the interest of the human subject indeed more important than the possible research outcomes?

Tutorial: Guidance Ethics approach
o Phase 1
 o Defining the case study
o Phase 2
 o Different actors (relevant stakeholders)
 o What are the effects of x? (hopes, fears, practical things)
 o What are the underlying values?
o Phase 3
 o Options for action (human, environment, technology)
 o How do these options increase the identified values?

2. Ethics, Science, Technology and Engineering

THE RESPONSIBLE CONDUCT OF RESEARCH AND GOOD SCIENTIFIC PRACTICE

Progress in science depends on trust between scientists that results have been honestly presented. It also depends on members of society trusting the honesty and motives of scientists and the integrity of their results (ESF and ALLEA 2011). Fostering this trust requires clear and strong ethical principles to guide the conduct of scientific research. In the United States, ethical research practice is generally referred to as RCR or the responsible conduct of research. The ORI, the US federal agency primarily concerned with education in RCR, has identified nine core instructional areas in RCR (ORI 2011; Steneck 2004).

1. Data acquisition, management, sharing, and ownership. This area focuses on the ways in which data are recorded, whether in notebooks or in other formats (such as electronic records, photographs, slides, etc.), and how and for how long they should be stored. It explores as well the question of who owns the data, who is responsible for storing them, and who has access to them. Issues of privacy and confidentiality of patient information, as well as intellectual property issues and copyright laws, are included.

2. Conflict of interest and commitments. Discussion of conflicting interests and commitments acknowledges the potential for interference in objective evaluation of research findings as a result of financial interests, obligations to other constituencies, personal and professional relationships, and other potential sources of conflict. It also considers strategies for managing such conflicts in order to prevent or control inappropriate bias in research design, data collection, and interpretation.

3. Human subjects. Ethical treatment of human research subjects references the requirements of the Office for Human Research Protections (OHRP), which are based on the ethical principles outlined in the Belmont Report (National Commission 1978). These principles include especially: (a) respect for persons as expressed in the requirement for informed consent to participate and protection of vulnerable populations such as children and those with reduced mental capacity; (b) emphasis on beneficence that maximizes the potential benefits of the research and minimizes risks; and (c) attention to considerations of justice in the form of equitable distribution of the benefits and burdens of the research across populations. Adequate attention to patient privacy and the variety of potential harms, including psychological, social, and economic, is essential.

4. Animal welfare. Research involving animals emphasizes animal welfare in accordance with the regulations of the Office of Laboratory Animal Welfare (OLAW). Principles here emphasize respect for animals used in research (Russell and Burch 1959) in accordance with the "three Rs": reduction of the number of animals used; replacement of the use of animals with tissue or cell culture or computer models or with animals lower on the phylogenetic scale whenever appropriate and possible; and refinement of the research techniques to decrease or eliminate pain and stress.

5. Research misconduct. Dealing with allegations of research misconduct is essential given its potential for derailing a research career. Definitions of scientific misconduct, including fabrication, falsification, and plagiarism, as well as other serious deviations from accepted practice that may qualify as scientific misconduct (as distinguished from error) and protections for whistleblowers, are important components of this area.

6. Publication practices and responsible authorship. Publication practices and responsible authorship examine the purposes of publication and how they are reflected in proper citation practice, criteria for authorship, pressure to publish, and multiple, duplicate, and fragmentary publication. This area also considers allocation of credit, the implications and assumptions reflected in the order of authors, and the responsibilities of authorship.

7. Mentor/trainee responsibilities. The mentor/trainee relationship encompasses the responsibilities of both the mentor and the trainee, collaboration and competition, possible conflicts, and potential challenges.
It also covers the hierarchy of power and the potential for abuse of power in the relationship.

8. Peer review. The tension between collaboration and competition is embodied in the peer-review process for both publication and funding. In this area of research practice, issues associated with competition, impartiality, and confidentiality are explored, along with the specifics of the structure and function of editorial and review boards and the ad hoc review process.

9. Collaborative science. Not only does research build on the work of others, but more and more investigators from disparate fields work together. The collaborative nature of science requires that often implicit assumptions about common practices, such as authorship and data sharing, be made explicit in order to avoid disputes.

In Europe, the term of art for discussion of research ethics is GSP or good scientific practice (ESF and ALLEA 2011; BBSRC 2013). However, while RCR emphasizes guidelines for positive research behaviors, the European focus is on broad general principles: honesty in communication; reliability in performing research; objectivity; impartiality and independence; openness and accessibility; duty of care; fairness in providing references and giving credit; and responsibility for the scientists and researchers of the future. In addition, there is a tendency in Europe to emphasize the avoidance of negative behaviors. At the same time, European consideration of research ethics is more likely to highlight the social responsibility of scientists and engineers than in the United States (Bird et al. 2013).

With the creation of the Danish Committee on Scientific Dishonesty in 1992, Denmark became the first European country to form a national body to handle cases of scientific dishonesty—again with the aim of promoting GSP. This development has prompted similar practices in other Scandinavian countries (Vuckovic-Dekic 2000). Serious cases of scientific misconduct in Germany prompted the German Research Council (Deutsche Forschungsgemeinschaft [DFG]) to create an international commission on professional self-regulation in science (Schneider 2000). This commission was charged with exploring causes of dishonesty in the science system, discussing preventive measures, examining the existing mechanisms of professional self-regulation in science, and making recommendations on how to safeguard them. It published a report titled Proposals for Safeguarding Good Scientific Practice, which advised relevant institutions (universities, research institutes, and funding organizations) to establish guidelines for scientific conduct, policies for handling allegations, and rules and norms for good practice (DFG 1998; Schneider 2000). The commission recommended that institutions retain authority for establishing misconduct policies (rather than establishing a centralized committee as in the United States and Denmark).

APPLICATION OF RESEARCH FINDINGS

The Enlightenment creed Sapere aude! (Dare to know!) symbolized the distinctively modern belief that scientific research is an ethical responsibility, indeed a moral obligation of the highest order. Premodern thinkers generally maintained that there were limits to the quest for knowledge, beyond which lay spiritual and physical dangers.
Although there is a long tradition of critiques of this foundational modern commitment (e.g., Johann Wolfgang von Goethe's Faust and Mary Shelley's Frankenstein), such critiques became more refined, extended, and institutionalized in the latter half of the twentieth century as science and technology began to profoundly alter both society and individual lives. The ramifications of various technological developments (e.g., atomic energy and genetic engineering) have demonstrated that unfettered research will not automatically bring unqualified goods to society.

Daniel Callahan (2003) argued that there is a widespread assumption of the "research imperative," especially in the area of biomedicine and health care. Though a complex concept, it refers to the way in which research creates its own momentum and justification for gaining knowledge and developing technological responses to diverse medical conditions. It can pertain to the ethically dubious rationale of pursuing research goals that are hazardous or of doubtful human value, or the rationale that the ends of research justify the means. It can also pertain to the seemingly noble goal of relieving pain and suffering. Yet this commitment to medical progress has raised health-care costs and distracted attention from the ultimate ends of individual happiness and the common good. Research, no matter how honorable the intent of those performing and supporting it, must be assessed within the context of other goods, rather than elevated as an overriding moral imperative (Jonas 1969; Rescher 1987).

As is considered in entries on "Science Policy" and "Governance of Science," the core assumption of the inherent value of research was operationalized in the United States in post–World War II governmental policies for the funding of scientific research. What came to be known as the "linear model" of science-society relations posited that investments in "basic" research would automatically lead to societal benefits (Price 1965). However, the framers of this policy never specified how this "central alchemy" would occur, and they did not adequately address the need to mitigate the negative consequences of scientific research (Holton 1979). The economic decline of the late 1970s and 1980s, the end of the Cold War in the early 1990s, and the growing federal budget deficits of the same period combined to stimulate doubts about the identity of purpose between the scientific community and society (Mitcham and Frodeman 2004).

The very fact that societal resources for the funding of scientific research are limited has stimulated questions about what kind of science should be pursued. For instance, physicist and science administrator Alvin Weinberg (1915–2006) argued in the 1960s that internal assessments of the quality of scientific projects and scientific researchers should be complemented by evaluation of scientific merit (as judged by scientists in other disciplines), technological merit, and social merit. For Weinberg, because of the inherently biased perspective of those within the community, "the most valid criteria for assessing scientific fields come from without rather than from within the scientific discipline that is being rated" (1967, 82). While the internal ethics of research asks "How should we do science?" the external ethics of research takes up a suite of questions involving participants beyond the immediate scientific community and addressing more fundamental ends.
As Daniel Sarewitz noted, the pertinent questions are "What types of scientific knowledge should society choose to pursue? How should such choices be made and by whom? How should society apply this knowledge, once gained? How can 'progress' in science and technology be defined and measured in the context of broader social and political goals?" (1996, ix).

Myriad attempts have been made to reformulate the relationship between scientific research and political purposes, where the criteria for assessing science derive partially from without rather than from within a particular scientific discipline. Models include Philip Kitcher's ideal of "well-ordered science" (2001) and the concept of "use-inspired basic research" put forward by Donald Stokes (1997). Such revised social contracts for science shift the focus from maximizing investments in research to devising mechanisms for directing research toward societal benefits; a shift from "how much?" to "toward what ends and why?" Legislation such as the 1993 US Government Performance and Results Act (GPRA) reflects this focus on the social accountability of publicly funded science, as do institutions that assess technology (e.g., the Dutch Rathenau Institute and the Danish Board of Technology) and research aimed at examining the ethical, legal, and social implications of various types of research and technology (e.g., studies performed in conjunction with genome and nanotechnology research) (see Juengst 1996).

The prioritization of research projects is another important area in this regard, including the issue of how much money to allocate to the study of different diseases (which often raises ethical concerns about systematic discrimination). The effective use of scientific research and technologies in policies intended to decrease poverty and improve the health of those in developing countries is a related topic. Diverse experiences with the Green Revolution, for example, show the importance of context in directing research toward common interests and away from negative outcomes, such as ecological harms and the exacerbation of wealth disparities (see Shiva 1992; Evenson and Gollin 2003). These topics raise the important issue of the role of various publics in guiding and informing scientific research and technological applications.

Although it is still largely true that "more money for more science is the commanding passion of the politics of science" (Greenberg 2001, 3), a number of critics and policymakers understand that more is not necessarily better. Scientific progress does not always equate to societal or personal progress in terms of such goals as safety, health, and happiness (Lightman, Sarewitz, and Desser 2003). The potential unintended physical harms that may result from scientific research have long been recognized and debated in terms of the roles of scientists and nonscientists in risk assessment. More recent developments, especially in bio- and nanotechnology research, and the growing specter of catastrophic terrorist attacks have lent a more urgent tone to questions about "subversive truths" and "forbidden knowledge" (e.g., Johnson 1996; Bird and Marchant 2009). Limiting scientific research raises practical questions, such as "Who should establish and administer controls?" and "At what level should the controls be imposed?" (Graham 1979). Some (e.g., McKibben 2003) have advocated the large-scale relinquishment of whole sectors of research, such as nanotechnology.
Others, including the innovator Ray Kurzweil (2005), argue for a more fine-grained relinquishment and the prioritizing of funding for research on defensive technologies to counteract potential misuses of science. This view holds that the optimal response to the potential for bioterrorism, for example, is to lessen restrictions on and increase funding for bioweapons research so that preventive measures and cures can be developed.

Discussion of the ethical implications of the use of scientific research is, at its core, about procedures for democratic decisions and the allocation of authority and voice among competing societal groups. This can be construed in broad terms ranging from criticisms of Western science as a dominant, even hegemonic, way of knowing that drowns out other voices, to defenses of science as an inherently democratizing force where truth speaks to power. These vague issues take on importance in concrete contexts that concern judgments about the appropriate degree of scientific freedom and autonomy within democratic societies.

The most important area in which these issues arise is the use of scientific knowledge in formulating public policies. Although bureaucratic political decision making has come to rely heavily on scientific input, it is not obvious how the borders and interstices between science and policy should be managed. On the one hand, it seems appropriate that research undertaken by scientific advisory panels (as distinct from research in general) be somehow connected to the needs of decision makers. On the other hand, sound procedures for generating and assessing knowledge require a degree of independence from political (and corporate) pressures. Failure in the first instance leads to the generation of irrelevant information and often delayed or uninformed action. Failure in the second case leads to conflicts of interest or the inappropriate distortion of scientific facts to support preexisting political agendas (Lysenkoism is an extreme example) or corporate policies. The latter instance is often couched in terms of the "politicization of science," which is a perennial theme in science-society relationships. Yet in order to attain the democratic ideal of being responsive to the desires and fears of all citizens, the politicization of science in the sense of explicitly integrating it into the larger matrix of goods (and evaluating it from that standpoint) is proper.

Scientific research can be "misused" when it is inappropriately mischaracterized (e.g., to overhype the promise of research to justify funding) or delegitimized (e.g., when conflicts of interest undermine trust in a scientific process [Pielke 2007]), and it is important to enforce ethical guidelines against these practices. However, the more common misuse of science, ranging from intentional to unconscious, is the practice of arguing moral or political stands through science (Longino 1990). This can inhibit the ethical bases of disputes from being fully articulated and adjudicated, which often prevents science from playing an effective role in policymaking (Sarewitz 2004).

3. UMCG research code

Regulations and ethical scientific research

The UMCG Board of Directors is ultimately responsible for all scientific research performed by UMCG staff, at the UMCG or elsewhere, as well as for quality assurance and quality control at all stages of the research process and for the promotion of academic integrity.
UMCG researchers and research staff should be knowledgeable, well-trained, and aware of the legislation and regulations that apply to the research in question, and should comply with them. There are dedicated Research Support Teams that help researchers meet procedural quality requirements (Research Support Service Portal, or reachable via [email protected]). Individual departments also have a Research Coordinator who can sometimes provide support.

General principles for scientific research
The following general principles apply to all scientific research.
- The head of the department is responsible for all scientific research conducted at or from the department. A study can be started only after their consent.
- A principal investigator is available and responsible for all aspects of a study.
- All research takes place within the context of a research programme or group.
- The basic principle is that the public and/or patients are involved before the start of a study (e-learning module, participation compass, patient participation UMCG, CCMO).
- The study's research goals and methods are laid down in a high-quality and complete research protocol with a DMP. The use of eLabJournal is mandatory for laboratory research.
- All studies in human subjects are recorded in the UMCG Research Register (see UMCG Research Register) via Superusers of the department. In all cases, the registration will contain: ProjectID and title; names of the UMCG staff involved in the design or conduct of the project; study type and characteristics; funding, parties involved, and contracts; study population, privacy measures, and data management.
- Any experimental research with human subjects is preregistered in a public register such as the European Clinical Trial Register or ClinicalTrials.gov. Pre-registration is also strongly encouraged for other types of research; this can be done, for example, via the platforms of the Open Science Framework or PROSPERO (systematic reviews).
- Some types of research require that specific guidelines or procedures be followed. It is the researcher's responsibility to determine which legislation and regulations apply to the research and to comply with them. Examples of such research include:
 - Research with ionising radiation (radiation safety, see Ionising radiation (nuclear energy law))
 - Research on genetically modified organisms (see Genetically modified organisms (and biological agents))
 - Research with genetic material from abroad (Nagoya Protocol, see Genetic material from abroad)
 - Research with human subjects (see Respect for research participants)
 - Research with laboratory animals (see Handling laboratory animals)
 - Research with AI, or where software such as apps is developed or studied. With the exception of AI as an analysis method, this must be coordinated with the ICT adviser, who checks this with IMO. If the AI or software is a medical device, the Relevant Expert must be involved.
 - International collaboration (see National Guidelines on Knowledge Security for Safe International Collaboration (Knowledge Security Office, see Knowledge security))
- The various steps and decisions within the study are completed in a timely manner and are well documented. The UMCG Research Register and the Research Toolbox will help the researchers with this.
- The researcher regularly organises critical feedback through work discussions or by appointing a supervisory or steering committee.
For larger or high-risk studies, an external committee is recommended and, if necessary, a Data Safety Monitoring Board.
- Researchers ensure a safe working environment (see, for example, the dangerous substances database and Genetically modified organisms (and biological agents)).
- Researchers use materials and energy sparingly, both at the UMCG and during business trips.

Respect for research participants
When doing research on human subjects, participants should be treated with respect. Their participation in the research is entirely voluntary, and their health, safety, and rights must be protected. This is the responsibility of the UMCG and the researchers. Research participants are protected by legislation and regulations. For an overview, see the website of the CCMO, the Code of Conduct for Health Research 2022, and the Netherlands Code of Conduct for Research Integrity 2018, p. 28 (UNL).

There are two types of research with human subjects:
- WMO-governed research: scientific research that falls under the Medical Research Involving Human Subjects Act (WMO). This applies to medical-scientific research in which participants are subject to actions or have rules of behaviour imposed on them.
- Non-WMO-governed research (nWMO): scientific research with human subjects that does not meet the above definition of WMO-governed research.
The CCMO provides tools to determine whether research is governed by the WMO.

WMO-governed research
The WMO is currently the most important law for research involving human subjects in the Netherlands. Research with medical devices (including many apps), in vitro diagnostics, and medicines is regulated by European regulations; for these types of studies, the WMO refers to the MDR, IVDR and CTR. It is mandatory that all clinical trials follow the guidelines for good clinical practice (for research with medical devices, see ISO 14155:2020; for drug research, see ICH-GCP E6(R2)). The GCP Directive is also a good guideline for many other types of research.

Explanation of the WMO
The WMO is based on the Nuremberg Code, the Declaration of Helsinki, and the ICH-GCP. The WMO concerns medical-scientific research in which people are subject to actions or have rules of conduct imposed on them. The main objective of the WMO is to protect the subjects in such research and to guarantee the integrity of the research data. For research that falls under the MDR, IVDR, or CTR, those European regulations are referred to. For other research governed by the WMO, the following protective measures must be taken:
- The study should be as safe as possible for the subjects and should put as little burden on them as possible. Proportionality is key here: the benefits of the study must outweigh the disadvantages/risks for the research participant.
- The research must be evaluated against medical, scientific, and ethical standards and approved before the start of the study.
- The subject must be provided with written information about the study (CCMO Model Subject Information) and given sufficient time to reflect on their participation. An independent expert must be available to inform the subject.
- Subjects must give written informed consent before participating in the study.
- The sponsor must take out liability insurance and WMO Subject Insurance to cover any damage to the subjects. Under certain conditions, exemption from the WMO Subject Insurance obligation is possible.
- Those conducting the research must ensure that the privacy of the subjects is adequately protected.
The WMO sets additional requirements for research with subjects under the age of 16, pregnant or breastfeeding women, and people who are temporarily or otherwise unable to give informed consent (for example, people with dementia or those in acute care situations).

The UMCG follows the NFU Guideline Quality assurance of research involving human subjects 2023. This guideline describes the minimum requirements that must be met in research on human subjects in UMCs. In accordance with this guideline, the UMCG has implemented its policy and procedures for research governed by the WMO in the Research Management System. Inclusion of subjects for a WMO-governed study may only start after approval by a recognised METc or the CCMO. The CCMO describes which research areas it reviews and provides tools to determine by which committee a WMO-governed study must be assessed. The METc UMCG is recognised by the CCMO.

All research governed by the WMO must be monitored. Monitoring is the responsibility of the sponsor. This means that in a researcher-initiated WMO-governed study, the researcher is obliged to arrange monitoring for the participating centres. Monitoring depends on the risk classification:
- For projects with negligible risk, the principal investigator may request a monitor from the UMCG Monitor pool. The principal investigator can also hire a monitor.
- For medium or high-risk projects, the principal investigator should hire an expert monitor. To this end, the principal investigator should request a cost quote from the SD CRO.

For drug trials, sponsors are legally required to publish the results in the European Union Clinical Trials Database (EudraCT, for studies under the CTD) or in the CTIS (for studies under the CTR) within one year of the completion of the study.

All clinical researchers involved in WMO-governed research (see Objective in Training and examination regulations OER eBROK 2020) are required to complete the Basic Course on Regulations and Organisation of Clinical Trials (eBROK®) of the NFU. UMCG researchers can find information about the eBROK® course here. For others involved in WMO-governed research (research nurses, students, etc.), WMO-GCP training is mandatory. See overview.

nWMO-governed research
nWMO-governed research for which the UMCG is ultimately responsible must be submitted to the CTc UMCG if it meets at least one of the following criteria:
- Research participants are included in or from the UMCG (either through informed consent or the 'no objection' system).
- It concerns the start of a biobank or databank for future scientific research, or the release of stored materials or data to researchers.
A national coordination effort is currently underway to harmonise this assessment between institutes and, in the near future, to triage study files. The CTc assesses the submitted files against relevant legislation and regulations, including the WGBO, the GDPR, and the Code of Conduct for Health Research 2022. Researchers who want to use healthcare data for research can use the NFU Guideline: Reuse of healthcare data for scientific research. In the case of research with biomaterials, the WZL will apply once it comes into effect.

Handling laboratory animals
The protection of laboratory animals is a key focus of the Wod (the Dutch Experiments on Animals Act). This law prohibits the use of animals in research if the research question can also be answered without their involvement. The three 'Rs' should be followed here:
1. Replacement: the use of laboratory animals is prohibited if the research question can be answered without their use.
2. Reduction: the number of animals used should neither exceed nor fall short of what is necessary to answer the research question.
3. Refinement: the discomfort for the animals should be minimised and limited as much as possible before, during, and after the experiment.

All UMCG staff members involved in animal experiments are required to conduct them with integrity, following applicable standards and safety regulations. The fundamental principle of the Wod is that an animal has intrinsic value. This recognition places obligations on researchers; for example, they must take the intrinsic value of the animals and their species-specific behaviour into account when designing an experiment. Experiments on animals can only be conducted after obtaining a permit from the national CCD for a specific project.

Genetically modified organisms (and biological agents)
Research with a product based on or produced by genetically modified organisms is subject to authorisation under the Genetically Modified Organisms (Environmental Management) Decree. For laboratories, a Contained Use Permit must be applied for through the Biological Safety Officer. Occupational health and safety rules also apply to work with biological agents. For more information, see the Biological Safety Manual and Genetically Modified Organisms in Research. For clinical research, the researcher must apply for a 'release into the environment' permit through the Environmental Safety Officer. The clinical study and the associated permit application are assessed by the CCMO and the Government Gene Therapy Office.

Gametes and embryos
Scientific research with embryos, residual embryos, and gametes used to create embryos falls under the Embryo Act. This research should be reviewed by the CCMO. The CCMO offers tools for researchers who want to conduct this type of research.

Genetic material from abroad
The Nagoya Protocol is an international agreement on access to genetic material from abroad for research purposes and the fair sharing of the benefits. The UMCG complies with this protocol, and the NVWA monitors this. Before the genetic material is obtained, the country of origin must give permission for this use. The provider and receiver together determine how the benefits are distributed. This contributes to the preservation of biological diversity and the sustainable use of its components.

Ionising radiation (nuclear energy law)
Research and actions involving the use of ionising radiation or radioactive substances are prohibited under the Nuclear Energy Act unless there is a specific nuclear energy law licence for this. The UMCG has an overarching Nuclear Energy Act complex licence. This type of research and these actions must demonstrably be brought under the complex licence via a written IntP Nuclear Energy.

4. How Ethics Travels: The International Development of Research Ethics Committees in the Late Twentieth Century

In this special issue for the European Journal for the History of Medicine and Health (EHMH), we investigate how new conjunctions were forged and segmented between medical science, society, and the state in the second half of the twentieth century in seven different countries, encompassing Europe and North America. We do so primarily through the lens of the research ethics committee, a governance device for the oversight of medicine and science that first emerged in the United States in the 1950s and that has spread throughout the world since.
The idea to focus on the origins and development of research ethics committees (RECs) came from an international workshop on the theme, organized at the Department of History of Science and Ideas at Uppsala University in May 2019, where several of the articles in this special issue were first presented.1

In recent years, RECs have attracted the attention of historians of medicine and science. Since roughly the 1970s, these bodies have become an integral part of the biomedical sciences. Whereas only fifty years ago physicians could individually decide whether to conduct an experiment on their patients, today, in many parts of the world, prior permission of an REC has become a sine qua non for medical research involving human subjects, a requirement that is expanding to encompass other scientific fields and research activities as well.2 Without REC approval, clinical research studies in most countries are forbidden, medical journals refuse to publish any results, and regulatory agencies withhold related market authorizations.

Gathered together in Uppsala, we asked ourselves the following questions. Why did that development take place in the second half of the twentieth century, even though the history of human experimentation in medicine goes back much further? Why did so many countries adopt the practice of ethics review in this period, even though it emerged as a distinctly American practice? And why has the practice endured, even though it is widely considered a "frustrating bureaucratic hurdle"? In this special issue, we address these questions, focusing on the political function that RECs – and more broadly clinical research ethics – have fulfilled in several national contexts in the past fifty years. In doing so, we hope to contribute to the mission of the newly established EHMH to offer innovative and geographically diverse perspectives on a topic of central importance in the history of medicine and health: the way societies have dealt with ethically contentious issues in medicine and science, and the centrality of the state therein in late modern history.3

1 RECs: Gatekeepers of Modern Science

The growing interest in the history of RECs in recent years partly stems from the surging importance of the field that is the history of knowledge. As Laura Stark points out, RECs nowadays have the power "to turn a hypothetical situation (this study may be acceptable) into shared reality (this study is acceptable)", and thus to give legitimacy to certain ways of probing into the world and not others. "In so doing, they change what is knowable".4 Thus, just as scientific journals and funding bodies have increasingly become "gatekeepers of science" in the late-modern period, historians have begun to investigate how the set-up and functioning of RECs have influenced what counts in science as responsible, trustworthy, and even authoritative.5 In a similar vein, historians and others are now beginning to pay attention to the type of objectivities produced by institutionalized ethics review.
Whereas the ethical permissibility of a clinical research study used to follow from the disciplinary gaze of individual physicians (i.e., constituting a form of disciplinary objectivity), RECs institute a form of procedural or mechanical objectivity: i.e., that which is considered morally right and fair follows from the correct implementation of standardized protocols and procedures.6 Even so, as Adam Hedgecoe shows in his recent Trust in the System, evaluation ingredients that by definition cannot be standardized, such as trust and "local knowledge", continue to be key elements in the ethical assessment of research protocols7 – leading to fruitful new questions among historians about the ways in which late-modern bureaucracies balance methods and procedures with supposedly pre-modern notions such as personal authority and familiarity in the production of scientific knowledge and the generation of public accountability.8

2 The Role of RECs in Liberal Societies

More prominently, the rise of RECs in the second half of the twentieth century has been used as a substrate upon which historians have been able to trace the changing societal standing of medicine and science in this period, and the ways in which governments of liberal democratic societies have sought to handle ethically contentious issues in these fields. One prominent line of enquiry, for instance, has revolved around the question of what function RECs were originally intended to fulfill in the governance of medical science. Earlier historical publications, mostly focusing on the United States, frame RECs – or Institutional Review Boards (IRBs), as they are called in the United States – as one of the first victories of the American bioethics movement, which emerged in the 1960s as a formerly disparate group of outsiders to the traditional medical profession that sought to provide a much needed public check on the sometimes not-so-ethical conduct of medical researchers and practitioners.9 More recent works have questioned this narrative, arguing that the first IRBs were established largely by physicians and researchers themselves, and served "as a technique for promoting research and preventing lawsuits" at a time when the social criticism of medicine and science was mounting.10

Other lines of historical research have linked RECs to the emergence of what anthropologists call "a culture of accountability" in the second half of the twentieth century.11 In this period, new regimes of oversight were realized in a wide variety of professional domains, subjecting the performance of professionals to regular inspection and obliging them to account for their activities in organized settings. This was no different for physicians: whereas up until the 1950s they had enjoyed a high degree of autonomy in deciding what sorts of interventions were permitted in their research and practice, they were increasingly called upon in the years thereafter to justify and request permission for their conduct in formally arranged settings – including, but not limited to, RECs.

Finally, for the history of bioethics more broadly, scholars have made much progress in recent years in linking the emergence of this movement to the secularization of society in liberal democracies in the late twentieth century. According to John Evans, the language and procedures of bioethics offered an alluring alternative for governments of increasingly pluralistic societies amid the waning of more traditional sources of authority, such as religious traditions.
Thus, bioethics flourished because it "met the needs of the bureaucratic state" in a secularizing political climate: i.e., instead of traditionally 'thick' approaches to morality in theology, bioethics offered 'thin' principles of morality (autonomy, beneficence, justice) that could easily be translated into liberal policies and regulations focused on patients' rights.12 Likewise, Benjamin Hurlbut has explored the history of bioethics as "an important new element in the repertoire of democratic governance" in the late twentieth century, with a fundamental question at its core: "how should a democratic polity reason together about morally and technically complex questions […]?"13 RECs can be understood as one of those methods for "reasoning together" about medicine and science, a method that puts certain democratic limitations on medical research and, in so doing, legitimizes its conduct at the same time.

3 The Dominance of the American Account

The above historiographical remarks pertain predominantly to the United States. For the American context, the history of IRBs has been documented in detail. Stark has traced the birth of communal ethics review back to the headquarters of the National Institutes of Health (NIH) in 1953, where the procedure was invented to handle a new practice in medical science: the conduct of clinical research with healthy human subjects.14 David Rothman has detailed how this new model for ethical decision-making was later introduced also in American clinical practice – a development that continues to capture the public imagination to this very day.15 And Zachary Schrag has given a striking account of how this model, originally designed for the governance of the biomedical and behavioral sciences, was introduced in the American social sciences and humanities as well in the 1980s and 1990s – a process that has become known by the rather cynical term "ethics creep".16

For other countries, however, we still know much less of this history. As Noortje Jacobs has written elsewhere, the American account "has become such a dominant trope in international scholarship that it often functions as a near-universal explanation for the changing governance of human experimentation after the mid-twentieth century".17 This is a pity. While institutional review boards were first established in the United States, the suggestion that the American model of ethics review was unilaterally implemented across the globe ignores the various forms and functions that RECs have taken in various national contexts, including the different uses that national governments and professions have found for RECs in their attempt to hold biomedical researchers to account while preserving their professional autonomy to engage in human subjects research. In recent years, therefore, scholars from various disciplines have called for more nuanced empirical studies that move away from the American story to show the varying forms and functions that bioethics and research ethics governance may take in different historical contexts.18 By detailing the rise and development of ethics review in seven national contexts, we aim to bring into focus the international dimension in the history of institutional ethics review.
On the one hand, the articles brought together here show that RECs emerged for very similar reasons in distinct national contexts, i.e., the growing dominance of American-led transnational funding and publishing networks in the second half of the twentieth century.19 At the same time, they show the variety of political solutions that RECs were meant to provide in different national contexts for the perceived ethical problems with human subjects research.

An important thread running through this collection of articles is the international rise and spread of RECs in terms of isomorphisms and pseudo-isomorphisms. In their now-classic neo-institutionalist text, Paul DiMaggio and Walter Powell introduced the concept of institutional isomorphism to explain the "homogeneity of organizational forms and practices" that comes about due to external environmental factors such as government mandates, financial threats, uncertain economic markets, or drives for professionalization.20 This concept, which is further detailed in the individual articles by Helena Tinnerholm Ljungberg and Noortje Jacobs, enables us to understand better why and how different countries all started to adopt a governance structure that was originally designed specifically for the handling of clinical research at the NIH – famously labelled by Laura Stark as "an ethics of place".21 At the same time, as Adam Hedgecoe has convincingly argued on numerous occasions (one of them being the 2019 Uppsala workshop), an exploration of the heterogeneity in the development that followed the initial institutionalization of RECs in different contexts might also require us to talk about pseudo-isomorphism: the notion that various versions of ethical review look similar on the surface and even share similar origins, but have still developed differently over time.

Why was it, then, that the REC, originally an American organizational structure for the governance of human experimentation, was adopted by so many other countries in and after the 1970s, despite its bureaucratic and legalistic elements being widely regarded as "typically American"? And how did this "imitation process" actually play out in practice? In exploring these questions, the articles in this special issue show that, on the one hand, the rise and spread of RECs ought to be understood as an example of the international standardization of medical ethics in the postwar period, under pressure from the ongoing globalization of biomedical science. On the other hand, we will show that the various national incarnations of the REC, while seemingly almost identical on the surface, are revealed upon closer investigation to have taken on surprisingly different characteristics.

This special issue contains examples of this variability in Sweden, the Netherlands, Canada, Switzerland, Germany, the Soviet Union, and perhaps surprisingly also in the United States. Where Tinnerholm Ljungberg (Sweden) and Jacobs (the Netherlands) point to the isomorphic influences of extra-national funding and publishing practices in the 1960s and 1970s, Magaly Tornay (Switzerland) and Fedir Razumenko (Canada) show how concerns voiced by the international medical community in this period prompted the establishment of the first RECs in their respective countries.
International elements are shown to have had a defining role in the realization of a new oversight system for human experimentation in medicine in these countries, but these articles also illustrate that this was not the end of the story, but rather the beginning. In each national context, various tensions arose about the societal and political functions that RECs were meant to fulfill, and various solutions emerged in each case. Matthis Krischel provides in his contribution a critical overview of the parallel development of research ethics governance in Germany and points to relevant differences between West and East Germany at the critical juncture in the country's history occasioned by reunification. Then, Pavel Vasilyev, Aleksandr Petrenko and Veronika Tayukina show how the USSR and its larger ideological principles of the centralized state pre-empted the need for RECs and instead took a different approach to research ethics governance that may be seen as a well-articulated alternative to the Western model. Finally, the special issue returns to the United States, with Sarah Babb's article showing that – there also – institutional ethics review has taken on very different guises and political functions than those with which it was originally conceived in the 1950s and 1960s, predominantly under the influence of the all-pervasive market logic that reigned supreme in the United States in the late twentieth century.

We are proud to have brought together these new geographical perspectives in this special issue for the EHMH. Nonetheless, we fully acknowledge that an even wider geographical representation, including contexts from all over the world, remains crucially important and will yield new insights. Important work in this direction has already been conducted by scholars such as David Reubi, Wen-Hua Kuo, and Rachel Douglas-Jones.22 We hope this special issue will inspire further research, as part of a general movement toward the goals articulated by Frank Huisman in his comments on the establishment of the EHMH in its first issue: "Each culture and each nation has a history of its own, and this should lead to specific historical narratives. Only when we reach a stage where Continental histories abound, are we capable of transcending those narratives, able to make real comparisons between national histories, and come to a deeper understanding of medicine and health in society."23

By focusing on geographical variations among RECs we incorporate and contribute to recent comparative research and theories on the relationship between medical science, states, and societies.24 By taking RECs as our focus, we pinpoint the intersection between these three areas and explore the travels of an American governance structure that was once locally designed to make a very specific type of research ethical, but that went on a remarkable world tour in the latter half of the twentieth century – a journey which continues to this day.

5. Guidance Ethics Approach. An ethical dialogue about technology with perspective on actions

CHAPTER 1: Guidance ethics

An interpretation of technology ethics as accepting or rejecting technology places technology and society in opposition. In that approach, technology poses a potential threat to society and it is the responsibility of ethics to determine which technology may be allowed and which may not. However, this picture is not correct. Technology and society are fundamentally intertwined.
Technology is developed by people with a view to a certain role for that technology in society. And society has always taken shape through interactions with technology, often in ways that were not explicitly intended by designers. For example, the printing press not only brought the possibility of reproducing texts more easily, but also contributed to the Reformation, the emergence of modern science and universities, the growing importance of knowledge, et cetera. We are just as connected with technology as with language or gravity: technology helps to make us the people we are. Technology and society shape each other: that is the lesson we can learn from the past 50 years of research in Science and Technology Studies.

This interconnectedness of technology and society entails a different role for ethics. The standard model of 'ethical assessment', in which normative theories help to decide whether a technology is acceptable or not, does not do justice to this interconnectedness. In most cases, the question is not whether a technology should be allowed or not, but how we can deal with it in a responsible manner. Moreover, a consequence of the interconnectedness of people and technology is that the ethical frameworks with which we assess technology also develop in interaction with that technology. What we understand by 'privacy', for example, is developing hand in hand with the technologies that are shifting the boundary between private and public.

Instead of seeing ethics as some form of 'assessment', then, it should also be seen as the normative 'guidance' of technology in society. And at the same time, ethics can also guide society in dealing with technology. Such an approach does not place ethics outside of technology, as an external 'assessor', but right in the middle of it. It is 'ethics from within', not from the outside. This type of ethics is not primarily focused on the question whether a technology is acceptable or not, but rather asks whether and under what conditions a technology can be given a responsible place in society. The central question in guidance ethics is not 'yes or no?', but 'how?' It does not focus on rejecting or accepting, but on the valuable design, implementation and use of new technology.

Central to this guidance ethics is the inventory of the possible social implications of a technology, and of the central values that are at stake. This is done in a deliberative process. In the guidance ethics approach, it is important to always start from concrete technologies and their specific effects and consequences. It is not about making generic analyses of 'digitization' or 'artificial intelligence' as such, but about their concrete applications in a societal domain. After all, the concrete interaction between humans, technology and society has a central place in this ethical approach. When making an inventory of effects, it is important to look not only at the consequences for individual users, but also at the social implications for, among other things, education, healthcare, the judiciary, legislation, policing and law enforcement. Moreover, technologies also influence frameworks of interpretation: they help to shape the meaning of central values such as privacy and autonomy.

CHAPTER 3: Principles for the implementation of guidance ethics

The how-question is central

Humans and technologies are connected and continuously influence each other, like two partners in a dance.
The core of guidance ethics is therefore, unlike in many other ethical approaches, not the human assessment of technological development. The core is the interaction, the 'how' question instead of the 'yes or no' question: how can people and technology develop in a valuable way? When we look at the case of the feeding robot, the primary question is not whether we should use a feeding robot, but how we can use it in the best way. Can we have a debate on how we can deal with the feeding robot instead of merely discussing what is right and wrong? Can we find options for action for good use? Instead of the question 'can we delegate care for vulnerable people to machines?', the question is: 'is there a way to take the core values in care as a starting point when developing and using care robots?'

The focus on the how-question does not mean that it is never possible to say 'no', or that in principle every technology can be used or introduced everywhere; however, the focus is not primarily on the 'yes or no' judgment. That question quickly becomes limiting, which means that options for action are not discussed. Also, the values or interests of certain groups are often not sufficiently taken into account. Focusing on the how-question makes it possible to actually connect ethics with technology. This question makes room to look for conditions under which a technology can function in a responsible manner. And those conditions lie in the design of the technology itself, in its social embedding and in the way people use it. But in the end, 'not' also remains a possible answer to the 'how' question. If it appears that the technology is not compatible with our values, the model's outcome is that the technology should not be used.

Small, continuous steps

Closely connected to asking the how-question is the awareness that improvement comes in (small) steps. If one is looking for a 'yes or no', 'is this allowed or not', one is looking for the ultimate answer. The guidance ethics approach allows us to see that the interconnection between human beings and technology develops in small, continuous steps. Technologies always adapt to how human beings interpret them, but users also adapt to the new possibilities. There are all kinds of small steps in the development and implementation of the feeding robot. The robot is becoming more and more precise, partly based on the input of the users. In addition, the role of the feeding robot in the care process and what is expected of the care professionals is changing. Some ethical approaches assume that the development of technology and its impact on human beings can be stopped and that ethics should determine when it has to be stopped. From the guidance ethics perspective, we think that this situation rarely occurs. What is possible, however, is a continuous consideration of what can be improved and how to take concrete steps to achieve that.

Technology in context

Because humans and technology are so closely connected, it makes little sense to speak about technology without involving human beings and their context. Indeed, it matters where and by whom the dialogue is conducted. With guidance ethics, we want to start a discussion about concrete technologies that function within a specific context, with real people. That is why we would rather not talk about 'Technology' or 'Human Beings' as if these were single, well-defined concepts. Discussions about AI or blockchain can be interesting, but only become relevant when they touch on practice.
This puts demands on the level of abstraction that we choose when talking about a technology. The case of feeding robots is a good example. We are not talking about 'robotics' (too general) or about a brand of robot with a certain serial number (too specific), but about 'feeding robots'. More specifically: feeding robots within a certain context, in this case an institution for care for the disabled. Lessons learned will (in large part) also apply to the use of other feeding robots in other institutions. They may also apply to a wider e-health context, or to wider or more limited applications of robotics; at the same time, they remain very specific to this particular institution and this particular feeding robot. In fact, we cannot assess without context. The patterns observed in the undermining project are only meaningful if agents and policy-makers can do something with them, if analysts give an interpretation of them and if conversations with residents and other stakeholders clarify how they see them. Ethical tension exists both in the network around the technology and in the data analysis itself.

Human values

The purpose of guidance ethics is to give human values a guiding role in the development, implementation and use of technology, ranging from justice, autonomy and speed to sustainability, safety, effectiveness, et cetera. Emotions, both positive and negative, can play an important role in the search for the values that are central to a certain technology. Fear, enthusiasm, astonishment, concern: these are all indications that the technology is putting something valuable at risk, or enabling it. As a result, emotions are an indicator of the normative frameworks that should be given a place in the design, implementation and use of this technology. Which values are relevant depends on the specific technology and context. In Chinese culture, for instance, different values prevail than in the culture of the United States of America, and other values prevail in healthcare than in construction. Moreover, the advent of a technology can change values. Before the introduction of mobile telephony, values such as 'reachability' and 'attention' had a different interpretation than they have nowadays.

There is almost always tension between the different values that play a role in a certain technology in a specific context. In Facebook's newsfeed, there is tension between the freedom to publish anything and the negative consequences of reinforcing statements that are less relevant or even untrue. In the undermining project, there is tension between the desire to obtain the most accurate information and the protection of the privacy of individuals. The pursuit of a valuable co-development of technology and human beings is, in short, complex. It is important to recognise this complexity and to find a way to deal with it. Reasoning based on one specific value, without being aware of this complexity, does not achieve much.

What will the future bring: in a positive and in a negative way

We do not know what new technology will bring us or where we will bring technology. When we look back at earlier predictions of the future, they often turn out to have been wrong. The rise of the internet was hardly predicted in the 1950s, but flying cars were. Nevertheless, images of the future are needed; they are an important part of the interaction between technology development and social change. Many ethical discussions focus on the possible disadvantages of a technology. In the guidance ethics approach, we assume that technology has both positive and negative consequences.
It is important to give both enough room in a dialogue. If we only talk about how a feeding robot can never offer patients human contact the way a healthcare professional can, we ignore the opportunity the robot offers patients to regain something of their human dignity by being able to eat by themselves again. There may be a parallel here with movements in other scientific areas: positive design (designing for new possibilities), positive psychology (focusing on how someone could thrive instead of focusing on someone's problems), positive health (not just focusing on what is wrong and not possible, but also on what is possible). Guidance ethics aims to move beyond 'negative ethics', centred around negative aspects that should be avoided or prevented, towards 'positive ethics', centred around the values that should be fostered in the design, implementation and use of technology.

Action options

The guidance ethics approach looks for concrete options for action in order to achieve a more valuable interaction between people, society and technology. We distinguish three types of options for action to achieve that valuable technological-social development.

Designing technology: ethics by design

Every technology has built-in values, so to speak: technology invites certain behaviour. A technology can therefore be designed in such a way that it better matches certain values. For example, the value of privacy can be safeguarded by allowing the user to control the cookies that are stored, or by giving the cameras on crowd-control drones a resolution low enough that it is possible to count numbers of people but not to recognise individual persons.

Environment: setting up the environment (physical aspect) and making agreements (social aspect); ethics in context

Every technology is used in a context: physical, social, organizational/legal. With new technologies, that context/environment is also adjusted. The increasing use of the car entailed the construction of (physical) sidewalks and traffic lights. Also socially, new agreements were made: pedestrians on the sidewalk, cars on the road, pedestrian crossings and refuge islands as safe places for pedestrians. These agreements have also been codified in traffic laws.

User: awareness and behaviour adaptation; ethics by user

When it comes to the use of technology, people can display more and less valuable behaviour. In traffic, for example, awareness and training take place through traffic lessons at school, driving lessons can lead to a driving license, and awareness is created by designated-driver campaigns and traffic signs depicting children at play.

In this chapter, various elements were discussed that are important for the practical translation of guidance ethics. These elements are the building blocks for the approach presented in the next chapter.

CHAPTER 4: Explanation of guidance ethics

Stage 1 Case: technology in context

Describe the technology and the context in which that technology functions. The point is to get a close understanding of what we are dealing with. Some discussions about technology are more about (positive or negative) images of that technology than about its actual functioning and its meaning for people. Because we opt for a focus on a concrete technology in a concrete context, we are able to come to a fairly precise description, both of the technology and of what the use of the technology means in its context. The point is to make a clear, understandable description, without too much jargon or technical details.
The description must be understandable for interested outsiders.

Stage 2 Dialogue: actors, effects, values

This step focuses on a further elaboration of the case. After having developed a closer understanding of the technology and its context, we need to investigate the possible effects of using the technology in that context. We want to know who is involved and which values play a role in the practices around the technology and its potential impact and implications.

Actors

In a specific context, it is usually quickly clear who the relevant actors are. The parties involved may also be asked who else could be a relevant actor. At a generic level, relevant actors often include clients/citizens, professional users, policy-makers and designers. For example, healthcare cases typically involve patients, care professionals and caregivers as relevant actors. Ideally, the people actually involved have input in the dialogue. They do not always have to be people who are actively involved in the use of the technology. The use of a technology can also have a huge impact on non-users; think of the influence of cars on pedestrians. If it is not possible for everyone involved to participate, there may be people who represent them or who are willing and able to think from their perspective. Academics or other experts with experience with or expertise in the subject can also be involved, to develop a broad social perspective.

Effects

The use of a technology has all kinds of effects. Some effects can be immediately clear, while others might only occur, for example, in the future or under specific circumstances. The first step is to collect the potential effects of the technology as openly as possible, without taking desirability or likelihood into account. Next, it is good and pragmatic to identify which effects are most relevant. The dialogue continues to be based on the concrete technology and context, and from there various potential effects are discussed. Distinguishing different kinds of effects can help in obtaining a rich and realistic picture: positive and negative effects, known and foreseeable effects, direct and indirect effects, effects for different actors, and effects on different levels: individual (micro), social (meso) and societal (macro).

Values

Technology is always surrounded by different values. Think of justice, applicability, reliability, solidarity, respect, autonomy. These values often remain implicit in discussions, because criticism of technology is typically formulated in more concrete terms. If the AI application in the mental healthcare (GGZ) institution makes someone feel that they are being monitored, the underlying value may be autonomy, or privacy. In most cases, several values play a role simultaneously. As with the effects, the first thing to do is to make an open inventory, followed by the identification of the values that are considered to be the most relevant, whereby it remains important to keep an eye on the 'less relevant' values. At present, various ethical codes and guidelines are being written in many sectors, in companies and by the government. A small selection from the field of AI alone yields the following examples: Google AI Principles, Microsoft AI Principles, UK initial code of conduct for data-driven health and care technology, European Commission High Level Expert Group on AI, Smart Dubai AI Principles, Nesta principles for public sector use of AI, Code of conduct AI the Netherlands ICT, AI Impact Assessment (ECP), Responsible Innovation: 7 principles for the public sector (Ministry of the Interior and Kingdom Relations).
As an example, the text box shows the list established by the EU High Level Expert Group on AI. For the guidance ethics approach, these are sources of inspiration for identifying the values that play a role in a technology in context.

Dialogue

This stage is called 'dialogue'. In an open exchange between the actors involved, it becomes clear what the possible effects and important values are regarding the use of a technology. A dialogue is an important part of the approach and often takes place in a workshop setting, certainly if the parties involved feel a joint responsibility for a technology and see that they need each other to take the next steps. Parts of the dialogue naturally also take place outside the workshop; the conversation continues. In addition to, and in preparation for, the actual dialogue, other means may also be used. Interviews provide a different kind of insight into how the discussion partners perceive the effects and values. Literature research can often be a valuable addition as well, certainly if followed by a good analysis.

A good dialogue stage has several types of outcomes. First, it often takes away uncertainties. The input of different types of knowledge gives everyone a better picture and therefore a better idea of the possible effects. By thinking through the effects, it becomes clear where the expectations and fears lie and, possibly, how realistic they are. Second, the actors bring in different perspectives, which brings up the most important values that play a role in this technology within this specific context. This often leads to mutual understanding, because it enables people to put themselves in the perspective of the other. They do not necessarily have to agree with the importance of the values of the other, but they can better understand the importance that the other attaches to them. There are also other methods available to support such a dialogue stage, such as 'moral deliberation', Socratic debate, et cetera. In fact, this stage is valuable in itself, besides its role in the guidance ethics approach. Within the guidance ethics approach, though, this stage is an essential bridge between the first stage – technology in context – and the third stage: the identification of options for action.

Stage 3 Options for action

The core of the guidance ethics approach is to guide technology in society in an ethically valuable way and to guide society in the ethically valuable embedding and use of new technology. This requires action. That is why the emphasis in the guidance ethics approach is not only on having a good conversation or establishing an ethical code, but also on arriving at options for action. Three types of options for action are available in the guidance ethics approach: connected to the technology, to the context and to the user. After a brief explanation, a box gives examples of options for action in each of the four cases of chapter 2.

Technology, ethics by design

Ethics by design has been in the spotlight for some time. It has now become best practice to include moments of ethical reflection in the design process of a technology, so that ethical values are actually included in the design of the technology. Ethics is not only a matter of human beings, but also of technologies. Every technology influences the choices and behaviour of human beings. An 'ethics by design' approach deliberately shapes that influence, based on explicit ethical reflection. Ethics by design sometimes happens without being labelled as such.
In a well-functioning market, customers indicate what they expect from a product and producers will try to adapt to it, including the values that go with it, such as safety, sustainability, aesthetics, applicability, et cetera. However, not all markets are perfect and not all social practices are markets. In healthcare, for example, customers have little purchasing power, because this power has been handed over to insurance companies; and with digital media platforms, monopolistic situations often arise. Obviously, governments can play a role here: they can impose conditions and requirements through legislation and regulations. This is a fairly slow and not very precise instrument, though. Moreover, not all values can be adequately translated into market mechanisms. This makes it interesting to also try to enable designers to incorporate ethical values into their work at an early stage.

Environment, ethics in context

The ethical dialogue about technology often focuses on the technology itself, while the environment or context in which that technology functions is just as important. That context also varies. It makes a difference whether Facebook is used by someone aged 13, 35 or 80, and whether it is used in a work context or privately. It makes a difference whether an AI application is used in healthcare or in the construction industry. It is therefore important to pay more attention to that environment, also because it contains part of the solutions. When developing a technology, a designer and a user will have to think about what that technology means for the system in which it is applied. An organization or a system will change after the introduction of a technology. How is this change shaped? Can it be shaped in a way that takes into account the values that were identified? The adaptation of the environment can be physical, social or legal. Think of the introduction of the car, a new technology at the time: it entailed physical adjustments such as sidewalks, roundabouts and traffic lights, social adjustments such as mutual agreements in traffic, and legal adjustments such as traffic law. In 'ethics by design', designers and (tech) companies often hold the key, but when it comes to adapting the environment, decisions are mainly made by organisations (meso) and the government (macro). A company that imports robots will make adjustments for the safety of employees, for training, for new processes, et cetera. Examples from the government side are the construction of roads, the introduction of the General Data Protection Regulation (GDPR) and the setting of preconditions for a personal health environment through (MedMij) standards.

Proper use, ethics in use

The user, too, plays a central role in shaping the social impact of a technology. People can handle technology with care or recklessly, and can be well trained or poorly trained. The first step is awareness: what does a technology do, what can it do and what can I do as a user? Options for action can take the form of information or awareness campaigns, but one can also become aware of a new technology and its implications through school, word of mouth, or the news media. The second step is actual behavioural change. Often that means training and exercise. This can vary from reading a manual to taking a course. Sometimes, behaviour is not difficult to carry out, but may be difficult to change. It requires a different habit or discipline, for instance not to drink and drive.
Everyone can do it, but it is about accepting that drinking and driving is indeed dangerous behaviour and that not drinking is not uncool. On the other hand, getting a driver's license is an example of something that requires a lot of training for most people before the use of the technology is safely mastered.

The user in a variety of roles

We see that the user shows up in different roles in all three options for action. In discussions, those roles are often confused, so we want to name them here again. The key in ethics by design is that the user has a say in the design of the technology. The key in context and environment is that the user can participate in the discussion about how that environment is adapted. There are plenty of examples of computer systems that were introduced without consultation with the employees, with great frustration as a result. Proper use is about how users deal with technology and context. The user may show more or less desirable behaviour.

We see these roles in internet banking, for instance. Technologically, payment via the internet is becoming increasingly easy. This ranges from a better interface on the website, to the introduction of an app on the phone, to the possibility of not only paying but also sending payment requests ('tikkies'). An example of setting up the environment was when a bank wanted to resell customer data anonymously. That resulted in a lot of opposition. The technology made something possible, but clients set preconditions: they did not want their data to be shared, even if the data were anonymized. The use of internet banking has become increasingly easy, but many (elderly) people had to learn it, with support from, for example, senior citizens' unions. The banks also try to encourage 'good behaviour', for example with the campaign in which people are encouraged to look for the 'lock' symbol on a website, to check whether they are on a secure site.

6. The Responsibility of Researchers and Engineers: Codes of Ethics for Emerging Technologies

14.1 Codes of Ethics: Solution to What Problem?

At the end of this volume, after all the rich conceptual analyses and case studies, a step back shall be taken in this final chapter. The question is the place of codes of ethics in the professions of engineering, computer science, and health care. To this end, I will focus on responsibility as a frame of reference, because dealing responsibly with emerging technologies is a challenge common to the different applications covered in this volume. Codes of ethics are an eminently important approach to making the concept of responsibility work for scientists, researchers, and engineers.

One way of taking a step back is looking at historical developments. Until the 1990s, the dominant narrative regarded technology as value-neutral, as just a set of instruments and tools that could be used for morally good or bad purposes. On that view, having codes of ethics for engineers and scientists to strengthen their moral capability would not make sense, except for ensuring scientific integrity. These groups would be free from any responsibility for society except doing their job well, while the users of technology would have to bear the full responsibility. A code of ethics for the users of technology could have been regarded as sensible, but not for its makers. This story might sound like a weak and strange echo of a time long ago.
In the meantime, numerous case studies have uncovered the normative and value background of the design and development of technology, making it the subject of ethical reflection (e.g., van de Poel 2009; Radder 2010). Consequently, engineers and researchers have been pulled into moral reasoning in many fields of technology, e.g., on the occasion of Artificial Intelligence (AI), genetic editing, or care robots. A similar story holds for science. While previously science was regarded as value-neutral in positivistic interpretations, many scientific developments obviously incorporate heavy moral questions. Examples are the atomic bomb with the subsequent debate on the responsibility of physicists, the genetic modification of organisms followed by bioethical discourses and public debates, the cloned sheep Dolly in 1997, and the birth of twins in China after intervention into their germline in 2018.

In recent decades, we have witnessed a boom of words and notions such as ethics, values, ethics commissions, and scientific integrity, vaulted by the postulate for responsibility. Compared to the dominance of the value-neutrality narrative of science and technology in earlier decades, today the opposite story seems to be dominant. Ethics and values are seen everywhere in new science and technology and their consequences. Convictions that ethics and responsibility are crucial for steering the development of research and innovation towards a good and sustainable future are widespread. The approach of Responsible Research and Innovation (RRI), sometimes shortened to Responsible Innovation (RI), expresses this emphasis on ethics and responsibility (von Schomberg and Hankins 2019), but also the optimism that shaping technology by involving ethics will be possible. Approaches such as Value Sensitive Design (Friedman et al. 2006) and Design for Value (van den Hoven et al. 2015) are explored, aiming to make RRI operable by fostering the cooperation of ethicists, engineers, and computer scientists.

The first wave of the ethics of responsibility in science and technology (e.g., Jonas 1979; Unger 1994) was more or less a philosophical endeavor of creating awareness. Currently, responsibility has become an issue of research policy, of computer science and engineering, of funding agencies expecting sound ethical conduct, and of public awareness of ethical issues, often followed by the public claiming involvement of stakeholders and citizens. This development was prepared in the huge United States and international research programs: the human genome program in the 1990s, the human brain program a decade later, and the National Nanotechnology Initiative (cp. Grunwald 2014a). In Europe, the first approaches to the already mentioned RI and RRI movements emerged about 15 years ago. An ethical code of conduct was among their first results: the code of conduct for responsible nanosciences and nanotechnologies research approved by the European Commission. The field of codes of ethics also flourished beyond this widely perceived event. Many professional associations, but also institutions such as universities and hospitals, developed and adopted codes of ethics.

This development gives rise to questions about the intentions, motivations, purposes, and impact of codes of ethics, such as: What can be said about the specific responsibility of professions such as engineers, computer scientists, medical professionals, and care personnel in the context of emerging technologies? What roles and places do codes of ethics have in professional ethics, e.g., of engineering?
How do they relate to object-oriented fields of ethical reflection such as the ethics of technology or the ethics of care? Why are they needed, what problem can they solve, and which challenges can they meet? How can codes of ethics "make a difference"? Do they make a difference, and, if yes, what difference? Can this