Ethical Audit of CantastorIA AI-Powered Audio Content
University of Pisa
Christian Di Maio, Cristian Cosci, Emanuele Fulvio Perri, Luca Dini
Summary
This document presents a comprehensive ethical audit of the CantastorIA project, a university-led initiative using AI to generate audio content. The audit assesses trustworthiness, transparency, and responsible deployment, focusing on compliance with privacy regulations and social implications. The study also highlights accessibility for vulnerable populations and environmental sustainability.
Full Transcript
IECS DOCTORAL SCHOOL – UNITRENTO | AI ETHICS TODAY (J. BRUSSEAU), MAY 30th 2024

Ethical Audit and Commentary of CantastorIA—an AI-Powered Audio Content Generation Service: Evaluating Trustworthiness, Transparency, and Responsible Usage Considerations

Christian Di Maio†, Cristian Cosci†, Emanuele Fulvio Perri†, Luca Dini†
PhD in AI for Society, University of Pisa
† Equal contribution

Abstract—This study presents a comprehensive ethical audit of the CantastorIA project, a university-led initiative that leverages various AI models to generate content. The audit examines the system from multiple ethical perspectives to assess its trustworthiness, transparency, and responsible deployment. Firstly, the assessment evaluates CantastorIA's compliance with key privacy and security regulations, such as the EU's General Data Protection Regulation (GDPR). It also applies the EU Commission's risk-based approach to assess potential data usage and repository security concerns. Transparency around data handling practices is a central focus. The study then explores the social implications of CantastorIA, including its accessibility for users with impairments or ADHD, as well as its environmental sustainability in alignment with the UN's 2030 Agenda. Certifications like TÜV SÜD are referenced to benchmark performance. Additionally, the audit examines matters of human dignity and equity, ensuring the AI-generated content respects individual worth and avoids harmful biases. Finally, it considers the system's impact on user autonomy, evaluating the appropriate balance between AI assistance and human agency. Overall, this ethical assessment provides a multifaceted evaluation of the CantastorIA project, offering guidance to university stakeholders on strengthening the system's trustworthiness and responsible development.

INTRODUCTION

This ethical audit of the CantastorIA project was guided by a set of core ethical principles that are widely recognized as fundamental to the responsible development and deployment of AI systems. These include respect for human rights and individual dignity, fairness and non-discrimination, transparency and accountability, privacy and data protection, safety and security, and environmental and social sustainability. The audit delved deeper into specific ethical considerations, posing research questions around the system's potential biases and their mitigation, the inclusivity and accessibility of the platform for vulnerable populations, the robustness of the governance and oversight mechanisms, and the broader societal impact of the AI-generated content. Particular focus was placed on understanding the balance between human agency and AI assistance, as well as the transparency and explainability of the underlying models. Additionally, the audit examined environmental factors, assessing the energy consumption and carbon footprint of the CantastorIA infrastructure, as well as its alignment with sustainability frameworks. Ultimately, this assessment aimed to provide stakeholders with a comprehensive evaluation of the CantastorIA project, highlighting areas of strength as well as opportunities for improvement to ensure the system's trustworthiness, responsible development, and positive societal impact.

For the future of the CantastorIA project, the plan is to create an interdisciplinary team composed of a diverse range of skilled experts. This team will include specialists from various relevant fields, such as: (a) AI developers and engineers with expertise in the technical aspects of the CantastorIA system; (b) software developers, particularly those with experience in Python and other programming languages used in the project; (c) ethicists, both academic and applied, to provide guidance on moral and philosophical considerations; (d) legal experts versed in relevant regulations, data protection laws, and intellectual property issues; (e) user experience (UX) and user interface (UI) designers to evaluate the human-centered aspects of the system. The selection of this interdisciplinary team will be critical, as the quality and comprehensiveness of the ethical audit will depend on the diversity of expertise and the absence of biases or conflicts of interest among the participants. Special consideration will be given to ensuring a balanced representation of stakeholders and a thorough understanding of the potential impacts of the CantastorIA system.

I. ABOUT PRIVACY, SECURITY AND HUMAN OVERSIGHT IN THE CANTASTORIA PROJECT.

The service will ensure the privacy of user data, including any text entered in search strings, audio recordings, and personal information. Any data deliberately granted by the user to the platform will be treated in accordance with the GDPR (General Data Protection Regulation) and will not be retained on any server unless explicitly consented to by the user. The service, being centered on the application of an AI-based model that performs audio manipulations, falls into the category of "limited-risk systems with transparency obligations" proposed in the risk-based approach described in the European AI Act regulatory framework; consequently, the following will be ensured: (a) an acceptable level of explainability, with a clear representation of the processes behind the outputs; (b) algorithmic transparency, through accessible descriptions of the behavior of the algorithms employed; (c) data usage, i.e., the purposes of any data collection that the user chooses to share with developers; and (d) risks, finally, of possible misuse by the user, or of unacceptable qualitative or quantitative processing performed by the service. In accordance with the European Commission's guidelines on the contexts and implementation of ethics in artificial intelligence, the developers are committed to ensuring acceptable levels of security, according to the principle of technical robustness. Such compliance will be limited to the infrastructure (in the case of a potential application) and to the server side (in the case of any repository for software management and deployment). In any case, human supervision will be fully ensured only during the development and deployment phases of the service; past these two phases, supervision will be the sole responsibility of the user.
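As a purely illustrative aid to the consent and transparency commitments above, the following Python sketch shows one way the opt-in storage rule and the four disclosure points (a)-(d) could be encoded. All names (UserSubmission, persist_if_consented, TRANSPARENCY_NOTICE) and defaults are hypothetical and are not taken from the CantastorIA codebase.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class UserSubmission:
        text_query: str
        audio_recording: Optional[bytes] = None
        consent_to_store: bool = False      # explicit opt-in, off by default


    def persist_if_consented(submission: UserSubmission, storage: list) -> bool:
        """Keep a submission only when the user has explicitly consented to storage."""
        if not submission.consent_to_store:
            return False                    # processed in memory only, never retained
        storage.append(submission)
        return True


    # Transparency notice covering the four obligations (a)-(d) listed above.
    TRANSPARENCY_NOTICE = {
        "explainability": "clear representation of the processes behind the outputs",
        "algorithmic_transparency": "accessible description of the algorithms employed",
        "data_usage": "purposes of any data the user chooses to share with developers",
        "risks": "possible misuse by the user or unacceptable processing by the service",
    }

The design choice in this sketch is simply that persistence is opt-in: absent an explicit consent flag, a submission is never written to storage.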
II. SOCIAL WELLBEING, ACCESSIBILITY, SUSTAINABILITY: POTENTIAL OF A TOTALIZING EXPERIENCE, ENVIRONMENTALLY FRIENDLY AND FOR ALL.

The service will prioritize the social welfare of its users and the community at large in ways that are for now only conjectural. In providing a complete techno-media listening experience, a soundtrack automatically tuned to the text and the reading intonation, the service shows promise of great value in improving concentration and enhancing the impact hitherto offered by audiobooks. The service can, in addition, be helpful for individuals with ADHD, low vision, and other conditions.

Granted a degree of human oversight in the formation of the dataset, the developers are committed to averting, as far as possible, the generation of content that could be harmful, manipulative, or detrimental to individual or social well-being. The system must actively work to support and enhance the mental, emotional, and intellectual well-being of its users.

In terms of sustainability, the developers have set goals designed to acquire certifications in the areas of quality, energy, environment, safety, and product, aiming to comply as far as possible with TÜV SÜD testing standards and the UN Sustainable Development Goals (Agenda 2030).
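To make the sustainability goal concrete, a back-of-the-envelope estimate of the service's energy use and emissions could look like the Python sketch below. Every figure (power draw, uptime, grid carbon intensity) is an assumption chosen for illustration, not a measurement of the CantastorIA infrastructure.

    # Assumed figures for illustration only; none are measurements of CantastorIA.
    AVG_POWER_DRAW_KW = 0.3           # assumed average server power draw (kW)
    HOURS_PER_MONTH = 24 * 30         # continuous operation
    GRID_INTENSITY_KG_PER_KWH = 0.25  # assumed carbon intensity of the local grid

    energy_kwh = AVG_POWER_DRAW_KW * HOURS_PER_MONTH        # 216 kWh per month
    emissions_kg = energy_kwh * GRID_INTENSITY_KG_PER_KWH   # 54 kg CO2e per month

    print(f"Estimated energy use: {energy_kwh:.0f} kWh/month")
    print(f"Estimated emissions: {emissions_kg:.0f} kg CO2e/month")

An estimate of this kind, fed with measured rather than assumed values, is the sort of quantity that certification schemes and Agenda 2030 reporting would ask the project to track.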
III. DIGNITY, EQUITY, AUTONOMY

Developers shall promote content that upholds and respects the inherent dignity of the individuals possibly mentioned, and uphold a common commitment to avert the perpetration of harmful stereotyping, discrimination, or degradation of any person or group.

The service, however, while holding as its cornerstone an ethical principle of fairness (promoting and generating inclusive, representative, and unbiased content, and ensuring decision-making and elaboration processes without discrimination based on characteristics such as race, gender, age, or socioeconomic status), by proposing listening to pre-existing works does not exclude (but neither does it incentivize) biases or representations that are socially unacceptable but are nevertheless faithful reproductions of the work's content and authorial intent.

IV. FURTHER STEPS TO TAKE INTO ACCOUNT.

A. EVALUATION OF ALGORITHMIC BIAS AND FAIRNESS.
(1) Assessing the AI models underlying CantastorIA for potential biases, especially around demographic factors, and ensuring fairness in the generated content; (2) examining the processes in place to monitor and mitigate algorithmic bias (a minimal illustrative check is sketched below, after point B).

B. IMPACT ASSESSMENT ON PEOPLE AND MEDIA.
(1) Analyzing the specific needs and challenges faced by vulnerable or marginalized groups in accessing and using CantastorIA; (2) ensuring equitable access and usability for diverse users, including those with limited digital literacy or technology access; (3) considering the wider societal implications of CantastorIA, such as potential effects on information ecosystems, public discourse, and the labor market; (4) examining how CantastorIA aligns with broader ethical principles and societal values.
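As an illustration of the kind of bias monitoring envisaged in point A, the Python sketch below computes a simple demographic-parity-style gap in how often generated content is flagged for review across groups. The functions, sample data, group labels, and the review threshold mentioned in the comments are all hypothetical and stand in for whatever monitoring process the project eventually adopts.

    from collections import defaultdict


    def flag_rate_by_group(samples):
        """samples: iterable of (group_label, was_flagged) pairs from an audit run."""
        counts = defaultdict(lambda: [0, 0])   # group -> [flagged, total]
        for group, flagged in samples:
            counts[group][0] += int(flagged)
            counts[group][1] += 1
        return {group: flagged / total for group, (flagged, total) in counts.items()}


    def parity_gap(rates):
        """Largest difference in flag rates between any two groups (0.0 if empty)."""
        return max(rates.values()) - min(rates.values()) if rates else 0.0


    # Illustrative audit sample: (demographic group, content flagged as problematic?)
    audit_sample = [("group_a", True), ("group_a", False),
                    ("group_b", False), ("group_b", False)]
    rates = flag_rate_by_group(audit_sample)
    print(rates)                              # {'group_a': 0.5, 'group_b': 0.0}
    print("parity gap:", parity_gap(rates))   # 0.5, e.g. above a 0.1 review threshold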
ESSENTIAL SITOGRAPHY/BIBLIOGRAPHY

AI Ethics Today Syllabus. URL = https://trento.ai.ethicsworkshop.org/

European Commission, AI High-Level Expert Group. Ethics Guidelines for Trustworthy AI. URL = https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

European Parliament. Artificial Intelligence Act. COM(2021)206, 21.4.2021. 2021/0106(COD). PE 698.792 – March 2024. URL = https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf

European Parliament. EU guidelines on ethics in artificial intelligence: context and implementation. PE 640.163. URL = https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI(2019)640163_EN.pdf

European Parliament. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. 2021/0106(COD). 5662/24. Brussels, 26 January 2024. URL = https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf

United Nations (ONU). Sustainable Development Goals. URL = https://unric.org/it/agenda-2030/