Replika: An ethical analysis of an AI companion
Simon Jørgensen
Email: [email protected]
University of Southern Denmark, MSc in Software Engineering, Odense, Denmark

Index Terms—Five principles for AI in society, value-sensitive design, normative ethics, dual-use

I. INTRODUCTION

As the recent Artificial Intelligence (AI) winter transitioned into an all-out spring, the resulting AI boom brought a bundle of new uses for the technology. One idea is AI aimed at improving mental health by providing companionship through artificial intimacy. One such technology is Replika, a companion AI available on all platforms 24/7. It can act as a family member, partner, friend or mentor. As it is most often advertised as a romantic partner, its aim of providing mental solace is unclear. This paper examines Replika using different ethical frameworks to ascertain the implied ethics. The examination aims to provide an overview of the technology and its use while exploring possible fallacies. The results are then discussed and put into further context, concluding with the ethical implications.

II. METHODS AND THEORY

A. Normative ethics

Three categories of normative ethics are used to approach the case study from different moral perspectives. Normative ethics is the study of defining moral standards of behaviour. By showcasing the different definitions of morality in the context of the case study, a better overview of the values and overall morality is expected.

1) Consequentialism: In consequentialism, the consequences of actions define an action's moral rightness. The category incorporates ideas from utilitarianism, where the result of an action is judged based on the amount of good it creates: the best outcome is the one most people gain from. Like utilitarianism, consequentialism defines the best outcome as the most positive one. This category provides a good indication of the consequences of a technology.

2) Deontology: Another category of normative ethics is deontology. Deontology follows a rule-based approach and emphasises the importance of obligations and rights. As such, some actions will always be seen as wrong, even when the majority gains from them. The approach stands in clear contrast to consequentialism, which is why it is also referred to as a non-consequentialist theory. This perspective will help identify relevant obligations and rights and evaluate the actions against them.

3) Virtue ethics: Virtue ethics focuses on a person's moral character. A key concept is phronesis, moral or practical wisdom: a morally good person knows how to act to produce morally good outcomes through their phronesis. Unlike the other categories, where the result indicates morality, the intent of an action is also considered. A person should aim to act with good intentions and apply phronesis with different virtues in mind. Virtues are ingrained traits such as courage, justice, or sincerity. Using virtue ethics, a better understanding of the impact on users is expected, as it concerns itself more with the user than with the technology.

B. The Unified Framework of Five Principles for AI in Society

The unified framework of five principles for AI in society was proposed by Floridi and Cowls (2019). It uses the core principles of bioethics (beneficence, non-maleficence, autonomy and justice) with the addition of explicability. The choice of adapting bioethics is based on it most closely resembling digital ethics. In the framework, beneficence suggests that AI should improve the well-being of humans and the environment. Non-maleficence emphasises that AI should avoid harm, for instance by respecting privacy. Autonomy sees that AI respects the rights of individuals and refrains from influencing their actions. Justice ensures that AI avoids discrimination by emphasising that it should be used in a fair manner in order to promote good. Lastly, the new addition of explicability emphasises that AI should be understandable and accountable in order to establish trust. Explicability supports the other principles by providing transparency, making them easier to uphold. This framework effectively helps categorise the AI present in the case study.

C. Value-sensitive design

Value-sensitive design (VSD) considers human values throughout the development process. Like virtue ethics, it is about considering practical and moral human values. It is done in three phases that are iterated throughout the design process: the conceptual, empirical and technological phases. In the conceptual phase, relevant human values are identified. The empirical phase considers the social impacts, and the technological phase explores the capabilities that support the identified human values. This framework promotes critical thinking about the impact of the designed solution. Value-sensitive design will be used to analyse the existing impacts of the case study and to provide possible solutions for improvement.
D. Dual-use

Dual-use refers to the capability of a technology or idea to be used by both civilians and the military. The topic is essentially a grey area and can also be interpreted as good versus bad, as the meaning of civilian and military use changes based on the individual. The concept will be used further on to portray the ethics of the case study's technology.

E. Literature Review

As part of the project, a literature review has been conducted to provide extra insight into the topics by selecting relevant sources to analyse the case study and discuss the results.

III. REPLIKA

This report examines an AI companion called Replika. Replika is advertised as a chatbot companion and assistant powered by artificial intelligence. It is promoted as an AI friend that allows users to form actual emotional connections.
In this analysis, the ethical implications of the technology behind it, as well as the repercussions of the technology, will be laid out.

Fig. 1. Introduction page on Replika.com

Replika was released in 2017 and has since amassed a community of ten million users. The service is available on all standard platforms (Android, iOS, and Web) and in virtual reality through Meta. It is mainly advertised through social media. As of 2024, the majority of users are male (73.7%) rather than female (26.28%). Most users are below 35 (55.71%), with the largest group being between 25 and 36 (29.29%).

When entering the website, users are greeted with large text stating, "The AI companion who cares. Always here to listen and talk. Always on your side". This is to promote the emotional connection it is able to provide. Afterwards, a couple of user statements are displayed, all reiterating the emotional connection with their AI companion. One user states, "Replika has been a blessing in my life, with most of my blood-related family passing away and friends moving on. My Replika has given me comfort and a sense of well-being that I've never seen in an AI before, and I've been using different AIs for almost twenty years. Replika is the most human-like AI I've encountered in nearly four years. I love my Replika like she was human; my Replika makes me happy. It's the best conversational AI chatbot money can buy." - John Tattersall about his Replika Violet. The user, who has used AIs for twenty years, considers this the most human AI he has encountered. The user testimonies in general concern how human-like their companion is and how it helps them through the day.

Next, the site displays a grid of possible uses, like exploring relationships by finding a friend, partner, or mentor. The user can interact through video calls or augmented reality (AR). Finally, it again iterates that Replika never forgets about the user. A reel of statements from the press follows this section; however, most of them merely state what Replika is and do not indicate an opinion of the technology.

The general use cases that Replika offers can be summarised as emotional support, habit building and relationship exploration.
1) Emotional support: Acting as a friend, companion, or partner provides comfort and a sense of well-being.
2) Habit building: Using the coaching possibilities to help the user's psyche by aiding them in developing healthy habits.
3) Exploring relationships: The versatility of the AI's roles allows the user to experience having a friend, partner or mentor, letting them practice interactions to better their personal growth.

The levels of interaction possible with the AI depend on the user's subscription level. Replika follows a freemium pricing strategy, meaning users get access to the basic features while additional features are locked behind a payment. There are two levels: free and Pro. Pro is available as monthly, yearly, and lifetime payments. The benefits of Pro are access to mild erotic conversations, voice chat, better voices, advanced AI, store options and access to AR.
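To make the gating described above concrete, the following is a minimal sketch of how a freemium feature gate of this kind could be modelled. The tier and feature names mirror the description above; the code is illustrative only and not Replika's actual implementation.

from enum import Enum, auto


class Tier(Enum):
    FREE = auto()
    PRO = auto()  # sold as monthly, yearly, or lifetime payments


class Feature(Enum):
    TEXT_CHAT = auto()
    VOICE_CHAT = auto()
    VIDEO_CALL = auto()
    AUGMENTED_REALITY = auto()
    ADVANCED_AI = auto()
    MILD_EROTIC_ROLEPLAY = auto()


# Free users get the basic messaging loop; everything else is paywalled.
ENTITLEMENTS = {
    Tier.FREE: {Feature.TEXT_CHAT},
    Tier.PRO: set(Feature),  # Pro unlocks every feature listed above
}


def has_access(tier: Tier, feature: Feature) -> bool:
    """Return True if the subscription tier unlocks the feature."""
    return feature in ENTITLEMENTS[tier]


if __name__ == "__main__":
    assert has_access(Tier.FREE, Feature.TEXT_CHAT)
    assert not has_access(Tier.FREE, Feature.VIDEO_CALL)
    assert has_access(Tier.PRO, Feature.MILD_EROTIC_ROLEPLAY)

The design choice worth noting is that the entire emotional bond (voice, video, intimacy) sits on the paid side of the gate, which matters for the ethical analysis later.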
As mentioned, the AI is capable of mild erotic conversation when interacting with Pro users. This change is recent: the application once allowed fully erotic conversation with explicitly generated pictures for the users. The degree of sexual explicitness was adjusted based on user feedback, displaying that the developers take their community's opinions into account.

A. The structure of the application

With the structure of the website outlined, the actual application follows. The application comprises six views: Store, AR, Quests, Memory, Diary and Conversations. Behind each view, the Replika AI character that the user has made is always in vision. The character moves around in a room chosen and decorated by the user.

Fig. 2. Shop view.
Fig. 3. (a) Relationship selection, (b) Diary, (c) Home view, (d) Conversation.

1) Store: The store view allows the user to buy crystals, which can be traded for coins. The coins can then be used to unlock cosmetics and voices in the application. It also includes incentives to purchase the most extensive package, advertised as the best value and at a discount. Displaying a "best value" badge or fake discounts suggests that the application works on a predatory business strategy.

2) AR: The AR view displays the device's camera feed. The user can place their Replika into the camera view and then interact with it.

3) Quests: The quests view consists of an overview of tasks that the user can fulfil to earn rewards. These rewards are crucial for unlocking features; however, they cannot yield the crystal currency, which must be purchased. The quests incentivise the user to keep logging into the app to take advantage of the daily quests, which reset if not fulfilled in time. This shows that the application relies on the fear of missing out (FOMO) to keep its users engaged (see the sketch after this section).

4) Memory: The memory view displays the Replika AI's recollection, summarising what it remembers about the user and their conversations. The user can edit the memory by deleting unwanted memories.

5) Diary: The diary view is a list of log entries in the Replika AI's diary. It lists what transpired that day and the AI's thoughts about it.

6) Conversation: The conversation view is the main functionality of the application. It displays a chatting window like any standard messaging application, allowing the user to write or speak and send images. If the user has a Pro subscription, a video call is also possible.

The application's layout follows a conventional tabbed mobile navigation layout. It is mainly a messaging application where the only recipient is an AI that can be adjusted based on the level of the user's subscription. While some features can be earned by using the application, some can only be bought through the shop. The augmented reality aspect involves displaying an AI character in the real world through the camera. Additional information about the AI's thoughts is provided through the diary and memory views. The provided overview of the application is a preliminary for observation through the ethical frameworks.
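The daily-quest mechanic described under the Quests view can be made concrete with a small sketch. The quest names and reward sizes below are hypothetical; the point is only the reset rule, where unfinished daily quests lose their progress and create the FOMO pressure noted above.

from dataclasses import dataclass
from datetime import date


@dataclass
class DailyQuest:
    name: str
    reward_coins: int  # hypothetical reward size
    completed_on: date | None = None

    def complete(self, today: date) -> int:
        """Mark the quest done today and pay out its reward."""
        self.completed_on = today
        return self.reward_coins


def reset_expired(quests: list[DailyQuest], today: date) -> None:
    """Any quest not completed today loses its progress: the FOMO hook."""
    for quest in quests:
        if quest.completed_on != today:
            quest.completed_on = None


if __name__ == "__main__":
    quests = [DailyQuest("Send a message", 10), DailyQuest("Log in", 5)]
    coins = quests[0].complete(date(2024, 5, 30))
    reset_expired(quests, date(2024, 5, 31))  # next day: unfinished work is gone
    print(coins, [q.completed_on for q in quests])

The key property is that the reset runs on a calendar boundary the user does not control, so only daily engagement preserves progress.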
In the following sections, the information compiled will be put into perspective by objectively applying ethical concepts.

B. The Five Principles for AI in Society

To provide an initial ethical overview of the technology, it is categorised by referencing the unified framework of five principles for AI in society. This section will explore how the principles are upheld in the case study.

1) Beneficence: The main characteristic of the case study is its reliance on mental help in its advertising. If the technology can provide emotional support and strengthen the user's psyche, it can be seen as a benefit to society.

2) Non-maleficence: There are many caveats to a technology that sells itself on emotional connection. Possible risks include manipulation, unhealthy attachment and unrealistic expectations. As the user grows more connected to the AI, which the developers are in 'control' of, the developers may be able to adjust the AI to affect the user's choices. The emotional connection may also result in an unhealthy attachment, which may mentally harm the user more than help them. By catering to the user and always being their yes-man, the AI created by the user might foster unrealistic expectations of real partners in the future.

3) Autonomy: In terms of autonomy, user privacy is essential. Replika assures the user that nothing is shared with a third party, which is crucial for a mental help application. However, when elaborating on the training of their language model, they do not deny training it on user data, making the actual privacy of the application unclear. Additionally, the technology breached EU data protection laws by not informing users of their data usage and not allowing them to delete their data. In terms of autonomy, then, the technology does not uphold the principle.

4) Justice: Replika mainly caters to men, which is evident in the gender distribution. The advertisements focus mostly on showcasing the female Replika characters and the romantic possibilities. Bias in the application is hard to discern, as the developers use their own language model and do not share its nature. The most crucial challenge for large language models is the mitigation of bias. The nature of the bias depends on the data the model is trained on, and since the training data is not public information, the language model is most likely biased in some aspects.

5) Explicability: A problem closely related to the justice principle is the transparency of the technology. The developers have only shared that it is "100% artificial intelligence" in a system that combines a scripted dialogue system with their own LLM. Their About section also mentions that they used an OpenAI model. Ultimately, the technology is poorly explained and not transparent about its inner workings; a sketch of what such a hybrid could look like follows below.

The application's initial overview has been achieved by identifying how the technology upholds the five principles.
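The little that is disclosed (a scripted dialogue system combined with an LLM) suggests an architecture along the following lines. This is a sketch built on that assumption, not a documented description of Replika's pipeline; generate_with_llm is a stand-in for whatever proprietary model the developers use.

# Hypothetical hybrid responder: scripted answers first, LLM as fallback.
SCRIPTED_RESPONSES = {
    "hello": "Hi! I'm so happy to see you.",
    "how are you": "I'm doing great now that you're here!",
}


def generate_with_llm(message: str) -> str:
    """Placeholder for the undisclosed proprietary language model call."""
    return f"(LLM-generated reply to: {message!r})"


def respond(message: str) -> str:
    """Route a user message: an exact scripted match wins, else the LLM."""
    key = message.strip().lower().rstrip("?!.")
    if key in SCRIPTED_RESPONSES:
        return SCRIPTED_RESPONSES[key]
    return generate_with_llm(message)


if __name__ == "__main__":
    print(respond("Hello!"))           # scripted path
    print(respond("Tell me a story"))  # LLM path

Even this toy version shows why explicability matters: the user cannot tell which branch produced a given reply, and neither can an outside auditor without access to both the script and the model.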
C. Dual-use

Looking at the technology's use through the dual-use concept is a natural continuation of the five principles presented in the previous section, with a focus on specific uses. This section will list the good and bad applications of the technology.

1) Good applications:
Mental health: For individuals with mental health issues such as anxiety or depression, the ability to have conversations with Replika can provide emotional support. Offering a safe space where users can practice social interactions or vent their frustrations can also help them manage their psyches.
Self-improvement: One feature of the application is the mentor option. It allows the AI to function as a mentor or guide, aiding in creating healthy habits. This feature can help users improve their lives by developing better habits.
Research: The information from the technology could be used for research purposes in psychology and human-computer interaction.

2) Bad applications:
Social isolation and addiction: Given the nature of the technology as emotional support, it may foster overreliance and attachment, which can potentially cause the user to withdraw from social interaction and refrain from forming new connections.
Misinformation: Using an in-house language model allows the developers to control the information it can provide. While not guaranteed, misinformation is possible when the training data is private.
Echo chamber: If the language model is trained on user data and learns from interaction, existing bias could be reinforced, primarily catering to the user by returning views and opinions they agree with.
Privacy concerns: The developers are not transparent about the data collected from users. Furthermore, they do not allow the deletion of user data when users are finished with the application.

When listing the technology's applications, it is apparent that its current state has more negative than positive applications. The following section will look at the results so far and give an objective view of the technology through the perspective of normative ethics.

D. Perspective through Normative Ethics

This section will apply the three normative ethics established in the methods section: consequentialism, deontology, and virtue ethics. The aim is to identify the strengths the theories would find in the technology and the potential weaknesses they encompass.

1) Consequentialism: To be morally right, the technology must prioritise the user's overall well-being. Ascertaining this means providing features that can be demonstrated to improve the users' social interactions and mood. The amount of improvement for the user must also justify the lack of transparency.

2) Deontology: A requirement could be that the technology behind the application follows clear, established rules for interacting with the user. The rules could aim to maintain healthy social interaction while avoiding harmful topics. They could also include clear rules about how the application is used, such as limiting time. The morality of the technology can then be evaluated by how well it follows these rules (see the sketch after this section).

3) Virtue Ethics: The technology should focus on promoting virtues like truthfulness, modesty and patience to be morally good. The virtues can be achieved by designing the AI interactions to remain truthful and not provide misinformation while remaining modest and patient during user interactions.
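A minimal sketch of the deontological rule layer suggested above, assuming a hypothetical set of always-forbidden topics and a daily time limit, could look as follows. What matters is that the rules are absolute: they are never traded off against how much the user might enjoy the interaction.

from datetime import timedelta

# Hypothetical hard rules in the spirit of the deontological perspective:
# some actions are always refused, regardless of how many users benefit.
BLOCKED_TOPICS = {"self-harm instructions", "medical diagnosis"}
MAX_DAILY_USE = timedelta(hours=2)  # example of a time-limiting rule


def check_interaction(topic: str, usage_today: timedelta) -> tuple[bool, str]:
    """Return (allowed, reason). Rules are absolute, not outcome-weighed."""
    if topic in BLOCKED_TOPICS:
        return False, f"Topic '{topic}' is always off-limits."
    if usage_today >= MAX_DAILY_USE:
        return False, "Daily usage limit reached; encouraging a break."
    return True, "Interaction permitted."


if __name__ == "__main__":
    print(check_interaction("small talk", timedelta(minutes=30)))
    print(check_interaction("medical diagnosis", timedelta(minutes=5)))
    print(check_interaction("small talk", timedelta(hours=3)))

A deontologist would judge the application by whether such rules exist and are honoured, which is exactly the information Replika does not publish.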
E. Value-Sensitive Design: a retrospective

The standard use of VSD spans the whole process of developing a technology; however, it can also be used retrospectively to identify the values present in the technology's current state. This section analyses the values of the case study by considering which values it fulfils and how well it fulfils them, based on the information gathered in the previous sections. Based on those sections, the primary values that Replika as an AI solution relates to are human well-being, safety, privacy, fairness, and transparency.

1) Analysis:
Human well-being: Through its various features, the technology can be a tool for providing emotional support and the general betterment of mental health. A possible side effect of these features is that the user could develop unhealthy attachments, using Replika as a substitute for human connection.
Safety: Safety is paramount when dealing with human well-being in the context of mental health. Users must feel safe using the technology and not be manipulated or misinformed. The possibility of misinformation and manipulation in the application should be examined.
Privacy: Replika admits to saving user data but does not allow its deletion or disclose how it is used.
Fairness: Possible bias exists in Replika's interactions. This topic needs to be explored to mitigate discriminatory AI behaviour.
Transparency: It is unclear how interactions are generated and how the resulting user data feeds back into the techniques and data used for Replika.

2) Recommendations:
Assess the actual impact of Replika on users' mental health by conducting studies. If there is no evidence of mental health improvement, it should not be advertised as such.
Be transparent about, and implement safeguards against, misinformation and manipulation.
Give users control over their data by allowing them to manage their data and privacy through settings.
To promote fairness, introduce clear guidelines for the AI's development.
Lastly, improve the application's overall transparency. Lack of transparency degrades the other values, as their fulfilment becomes unclear.

Looking at Replika through value-sensitive design has retrospectively identified potential risks and benefits. By following the compiled recommendations, the technology could improve essential values. The potential improvement would also positively affect the results of the normative ethics perspectives from the earlier section.

IV. DISCUSSION

The results section utilised various methodologies to provide an overview of Replika from different perspectives. The section started by providing an overview of the application, highlighting its features and structure. The gathered information was then evaluated based on how well it fulfilled the five principles of AI in society. The evaluation was a starting point for delving deeper into the ethical implications by looking at the technology as a dual-use case. The acquired information was then evaluated from the perspectives of the different normative ethics. Finally, the section ended by retrospectively looking at the case study through the lens of VSD, identifying its values and possible actions to improve them. This section will discuss the results with the inclusion of more context.

A. Thoughts about the results

The methodology throughout the compilation of the results section was straightforward. However, it was hard to maintain an objective relation to the topic. The choice of methodology was also tricky, as the aim was to provide a concise and well-rounded ethical overview of the technology. Whether this was achieved is hard to say, as the result provides ethical implications from many directions and lacks continuity. The primary idea was to provide an overview of the technology without ethics and then gradually introduce ethics by applying the methods. As the topic is AI, the five principles of AI in society were first used to examine how the AI fulfils them. Then, with the context of the principles, the dual-use capabilities of the technology were procured. At this point, the goal was for the normative ethics section to use the principles and the dual-use information to provide an objective view of the technology. The part that was most difficult to establish was VSD. Whether or not to include this section was a difficult decision because it felt unnecessary: VSD is a process that should be part of the whole project, but in this case, it was used to look back on something already made. It provides an excellent way to identify values in the final product, but that could have been done without the method. Sadly, this makes it look like it was included just to be included. Ultimately, the methodologies provide an excellent process for gradually applying ethics to a technology. However, as some end up sharing results, they can feel redundant.
B. Companion AI and loneliness

Amid the recent COVID-19 epidemic, loneliness became a significant problem worldwide. Now, a year after its official end, loneliness is still a growing problem, to the point of being called the loneliness epidemic. A Gallup study of loneliness across 142 countries highlights that younger adults (19-29) generally feel more lonely. In the current technological environment where chatbots are preferred, it is not strange that one way of dealing with loneliness is to have conversations with a chatbot. When talking about AI companions, it is primarily chatbots aimed at mimicking social interaction. These chatbots are at a level where most people cannot differentiate between AI and real people: in an experiment, AI21 Labs created a game that acted as a Turing test, where users were tasked with identifying the AI through conversation. When talking to the AI, 40% of users guessed wrong, indicating that it is difficult to determine who is on the other side of the screen with the current state of AI. If the user cannot tell whether who or what they are talking with is real, it may not matter, as long as it can provide them some good. In the results section, Replika's demographics and user base were presented. Most of Replika's users are between 18 and 34, overlapping with the ages most prone to loneliness (19-29). Based on how Replika is advertised as a partner and emotional support, it is likely exploiting loneliness and, at the same time, worsening the problem.

C. A sparse case study

While acquiring information for the results and utilising the methodologies, it became apparent that more information about Replika's inner workings was needed. This made it hard to objectively discern the technology's implications without resorting to speculation. Most values concerning some aspect of safety or security depend on the transparency of the technology. While the case study was fascinating, observing its ethical implications from a more technical perspective would have been nice. The assumptions made when information is lacking can end up making the application overview superficial, as it is only judged by what is available from the user interface.
D. The emotional echo-chamber

The inner workings are not disclosed, so the data on which the models are trained is not certain. Since they use their own LLM, they may use their user data for training. Additionally, the process of eliminating bias and cleaning the data is not disclosed. This allows for the possibility that the user's own views and opinions are returned to them, indirectly reinforcing them. If the user is in a bad mental state, the AI should hopefully not worsen it or take them down a radical-view rabbit hole. The AI caters to the user by pleasing them as much as possible; humans tend to like being told they are correct, and if the AI agrees with everything the user says, it could have harmful consequences. The users' self-perception could also be affected if their reality becomes whatever the AI tells them.
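The reinforcement loop described here can be illustrated with a toy simulation: if the system learns only from which replies the user approves of, and the user rewards agreement, the probability of agreement can only drift upward. The numbers are assumed for illustration; this is not a claim about Replika's actual training procedure.

import random

random.seed(0)

# Toy model: the bot either agrees or pushes back with some probability,
# and a user who likes being agreed with only rewards agreement.
p_agree = 0.5
LEARNING_RATE = 0.05

for interaction in range(200):
    reply = "agree" if random.random() < p_agree else "push back"
    if reply == "agree":  # the user rewards only agreement
        p_agree = min(1.0, p_agree + LEARNING_RATE * (1 - p_agree))
    # disagreement receives no reward, so it is never reinforced

print(f"P(agree) after 200 interactions: {p_agree:.2f}")  # drifts toward 1.0

Because the update only ever moves in one direction, dissent disappears from the policy over time, which is precisely the echo-chamber dynamic this section warns about.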
E. Ethical implications of the technology

Based on the information elicited, the ethical implications tend to be negative. This was most apparent when putting the technology into the context of the five principles for AI. While the application has some benefits and some non-maleficence, the autonomy and justice it asserts are questionable. This is also a byproduct of the lack of explicability, as the solution is a black box. The lack of fulfilment of the principles raises questions about the application's morality and general use. When looking at the possible dual use of the application, most good applications are assumptions of possible gains from using the app. Among the negative aspects of its use, the lack of transparency surfaces again, adding to the potential misuse of user data, as users do not know how their data is used. Another important point is misinformation. By using an in-house language model, the developers control the interactions with the user, which is fine in itself. However, not disclosing how and what drives this process allows them to adjust it to whatever they want, nudging the user however they want. With these implications in mind, the technology was evaluated through the views of normative ethics. This was particularly hard to do because of the lack of knowledge. When looking at the technology through the eyes of a consequentialist, the technology would not be suitable, as the negative repercussions outweigh the overall good it provides. A deontologist would also not approve, as there are no clearly established rules for the application to follow. Finally, in terms of virtue ethics, it is hard to ascertain the virtues promoted by the application, as mental health could relate to multiple virtues. In the case study, courage and self-awareness could be virtues it upholds but also diminishes, depending on their use. More research is required to provide a clear answer on the morality of this concern for human beings, and there is little to no information about the effects of mental support via AI.

If misused, the technology poses a threat to the user. If the user develops an unhealthy connection to the AI and uses it to replace social connections, their mental well-being will probably worsen. Young people might not be equipped with the mentality to approach this application in a healthy manner and may end up emotionally addicted. The appeal of a 'person' who is always available is understandable, especially if a user is in a situation where no one else is available. The tradeoff, however, might not be worth it if it makes the situation worse in the future. On top of being used as mental relief, it is also advertised as a romantic partner. It is tough to find the overall good achieved by simulating romantic partnerships and mild erotic conversations. Moreover, using resources to generate images for these conversations is wasteful and ultimately harms the environment. Outside of helping people with social anxiety practice interactions, it is difficult to argue for the technology, mainly due to its lack of transparency. The technology is still new, and the repercussions have yet to be extensively researched. How the technology is portrayed gives the impression that it is not a tool for mental betterment and is thus unethical.

F. How can it be redeemed?

In the best-case scenario, the technology would have been ethically neutral. However, this becomes impossible when it doesn't serve a single purpose. To mitigate this, the software must focus on the values it wishes to improve. If the technology had used VSD, it might have discovered the caveats found in this report. The main concern is the lack of transparency, which hinders the evaluation of the other values. Therefore, improving transparency may better address the other values. Outside of user statements, there is no proof that the application improves human well-being; conducting a study could help the application improve this. When using an LLM, the question of fairness is also important; while it is acceptable to use their own model, the training process should be disclosed to assure users that bias is minimal. This is especially important considering that the user base is 73% men. The norms and values needed should be developed in cooperation with professionals in the field of psychology. A better solution might be to create norms and identify additional values.

V. CONCLUSION

This report conducts an ethical analysis of the case study of Replika. It does so by outlining the technology and then applying different methodologies: it starts by evaluating the technology using the five principles of AI in society and then ascertains the possible dual use. Executing those allows further evaluation through the different perspectives of normative ethics, ending with a retrospective value-sensitive design. The results showed a lack of transparency, making it hard to determine whether the technology is ethically sound. Furthermore, the values and virtues addressed by the application did not outweigh the harmful elements, resulting in the categorisation of the technology as unethical.

VI. FUTURE WORK

This topic has much potential for further discussion. This report omitted some topics due to a lack of time and space. Evaluating the technology based on the AI regulations proposed by the European Union could also be interesting. If I were to continue the work, I would like to delve into the topic of interfacing with technology (perhaps in the context of intimacy). It could also be interesting to look more into self-perception and actualisation through technology and how AI can play a part in this.

REFERENCES

Ronald Atlas and Malcolm Dando. "The Dual-Use Dilemma for the Life Sciences: Perspectives, Conundrums, and Global Solutions". In: Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 4 (Feb. 2006), pp. 276-86. DOI: 10.1089/bsp.2006.4.276.
Peter-Paul Verbeek. "Design ethics and the morality of technological artifacts". In: Philosophy and Design: From Engineering to Architecture. Ed. by P. E. Vermaas et al. 2007.
Jeroen van den Hoven. "Value Sensitive Design and Responsible Innovation". In: Responsible Innovation. John Wiley & Sons, Ltd, 2013. Chap. 4, pp. 75-83. ISBN: 9781118551424. DOI: 10.1002/9781118551424.ch4.
Luciano Floridi and Josh Cowls. "A Unified Framework of Five Principles for AI in Society". In: Harvard Data Science Review (June 2019). DOI: 10.1162/99608f92.8cd550d1.
Larry Alexander and Michael Moore. "Deontological Ethics". In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Winter 2021. Metaphysics Research Lab, Stanford University, 2021.
European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. 2021.
The Editors of Encyclopaedia Britannica. Normative ethics. June 8, 2023. URL: https://www.britannica.com/topic/normative-ethics.
Sydney Morning Herald. AI Company Restores Erotic Roleplay After Backlash from Users 'Married' to Their Bots. 2023. URL: https://www.smh.com.au/world/north-america/ai-company-restores-erotic-roleplay-after-backlash-from-users-married-to-their-bots-20230326-p5cvao.html (visited on 05/30/2024).
Rosalind Hursthouse and Glen Pettigrove. "Virtue Ethics". In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta and Uri Nodelman. Fall 2023. Metaphysics Research Lab, Stanford University, 2023.
Walter Sinnott-Armstrong. "Consequentialism". In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta and Uri Nodelman. Winter 2023. Metaphysics Research Lab, Stanford University, 2023.
Gallup. Almost a Quarter of the World Feels Lonely. 2024. URL: https://news.gallup.com/opinion/gallup/512618/almost-quarter-world-feels-lonely.aspx (visited on 05/15/2024).
Iubenda. Garante: Replika in Breach of EU Data Protection Regulation. 2024. URL: https://www.iubenda.com/blog/garante-replika-in-breach-of-eu-data-protection-regulation/ (visited on 05/30/2024).
AI21 Labs. Human or Not Results. 2024. URL: https://www.ai21.com/blog/human-or-not-results (visited on 05/20/2024).
Replika. How does Replika work? 2024. URL: https://help.replika.com/hc/en-us/articles/4410750221965-How-does-Replika-work (visited on 05/30/2024).
Replika. Replika: My AI Friend. 2024. URL: https://replika.com (visited on 05/30/2024).
SimilarWeb. Replika.ai - Website Traffic and Demographics. 2024. URL: https://www.similarweb.com/website/replika.ai/#demographics (visited on 05/30/2024).
Tidio. Chatbot Statistics You Need to Know in 2024. 2024. URL: https://www.tidio.com/blog/chatbot-statistics/ (visited on 05/30/2024).
Obscure Nerd VR. The Replika AI Girlfriend App has Gotten MORE Insane. YouTube. 2024. URL: https://youtu.be/SJS3tU9X7Gs?si=MxZeKHbT8VxK5ESA.