FUNDAMENTALS OF IT LAW [ET7004]
ARTIFICIAL INTELLIGENCE - GROUP 1
Abigail Agyemang, Hillary Esi Akoun, Andrea Amaduzzi, Dayana Amanzhol, Pelinsu Atamer, Irem Aydiner, Loredana Bargan, Francesco Bellabona, Alessandro Bianco, Eleonora Bonfiglioli, Alberto Boscariol.
Università Ca’ Foscari Venezia - BSc Digital Management

Abstract
AI is raising significant legal challenges around accountability, privacy, and ethics, and different countries are handling these issues in different ways. The EU focuses on strict privacy rules, the U.S. relies more on the market, and China exercises strong government control. Privacy is a major concern, since AI depends on personal data, which raises questions about consent and protection.

Introduction
AI helps machines think, learn, and make decisions, and it is changing industries like healthcare, finance, and transportation. With these changes come significant legal and ethical questions. Key issues include:
- Accountability: who is to blame when AI goes wrong?
- Transparency: how do we explain how AI makes its decisions?
- Privacy: how do we keep our data safe?
- Ethics: how do we stop AI from being biased?
Different countries handle these challenges in their own way. The EU focuses heavily on privacy and risk management, the U.S. prefers a more market-driven approach, and China keeps the technology under heavy government oversight. Even though the approaches differ, the goal is the same: protecting people while encouraging innovation.

Historical Context and Development of IT Law
AI and IT law have developed significantly since the 1970s with the rise of computers and the Internet. Early laws focused on data security, privacy, and intellectual property, such as the U.S. Computer Fraud and Abuse Act of 1986. The 1990s and 2000s saw internet expansion and the emergence of e-commerce, leading to regulations on online transactions, digital rights, and privacy, like the EU Data Protection Directive (1995) and HIPAA (1996).
The rise of AI in the 2010s brought new challenges, including algorithmic bias and transparency. The GDPR (2018) addressed AI-related data privacy concerns. The 2020s shifted focus to AI ethics, risk management, and accountability, with frameworks like the EU's AI Act (proposed in 2021) targeting high-risk applications. Challenges such as liability for AI harm, intellectual property issues, and AI in surveillance remain unresolved. Looking ahead, AI and IT law will likely emphasise global standards for safety and ethics, balancing innovation with accountability and human rights.

The Ethical Implications of Artificial Intelligence
As AI technologies expand across sectors, addressing ethical concerns is crucial to prevent risks such as discrimination and privacy violations. Core ethical principles in AI development include fairness, transparency, and accountability. AI systems must avoid discrimination and address biases in training data. In sensitive areas like hiring and criminal justice, ensuring algorithms are explainable and transparent is essential to promote accountability.
Bias in historical data can perpetuate social inequalities. For example, in the UK in the 1980s, biased medical admissions data led to discriminatory outcomes. Effective pre-deployment testing and independent audits are critical to mitigating these risks.
The ethical implications of AI vary across regions due to cultural and social contexts. In low-income countries, the lack of regulation increases the risk of exacerbating inequalities. Inclusive governance and ethnographic research are essential to developing fair solutions.

Key IT Law Issues with Artificial Intelligence
Artificial Intelligence (AI), because of how it intrinsically works, presents an enormous challenge for IT law.
The greatest problems lie in the following areas:
- Data Collection and Privacy: AI applications require massive volumes of data, which raises concerns about user consent and data security, along with compliance with regulations such as the GDPR (General Data Protection Regulation). Ensuring that data protection rules are observed in AI applications is essential to maintaining user trust.
- Intellectual Property (IP) Rights: AI-generated content, and the question of whose prompt produced it, challenges conventional IP frameworks. Current IP rights need to be reconsidered to account for the peculiarities of AI creation with regard to ownership and protection.
- Liability and Accountability: responsibility is hard to ascribe for decisions made by autonomous systems, so clear liability frameworks need to be in place to redress potential harms from AI applications.
- Discrimination and Fairness: AI systems can reflect the biases of the data on which they are trained, which can result in unfair outcomes. Ensuring fairness while minimising bias is a core concern, from both a legal and an ethical point of view.
- Transparency and Explainability: some AI models have a "black box" property that makes them very hard to understand. Legal frameworks are increasingly evolving to make AI systems transparent and explainable in order to improve accountability and trustworthiness.
Addressing these issues is vital for the responsible development and deployment of AI technologies within the legal landscape.

Differences between AI Law in Different Parts of the World

European Union AI Act
The European Union wants artificial intelligence regulated to ensure that the development and the use of this technology are safe and responsible.
The AI Act ensures that AI systems follow EU rules and values concerning human rights, security, privacy, transparency, non-discrimination, and social and environmental well-being.

Objectives and Core Principles
The AI Act balances technological innovation with the need to protect individuals and society. Other important pillars of the regulation are transparency and accountability: AI systems should be traceable throughout their lifecycle. The Act takes a risk-based approach, categorising AI systems by the potential harm they pose.

AI System Classification: Four Risk Levels
- Unacceptable risk: systems that pose a direct threat to safety or fundamental rights are banned. This covers, for example, systems used to manipulate human behaviour or for social scoring.
- High risk: AI systems used in areas such as transport, education, and employment must undergo a conformity assessment to ensure their safety. The European Commission will maintain a publicly accessible register.
- Limited risk: transparency measures will be required for informed consent, for example making users aware when they are interacting with AI, such as chatbots.
- Minimal risk: applications in this category, such as anti-spam filters and recommendation systems, are widely used but not strictly regulated; they must still conform to EU data protection laws.

Responsibilities and Transparency
Conformity with the AI Act rests with AI providers, but other actors, such as distributors and users, bear some responsibility too. Generative AI models, of which ChatGPT is an example, will be subject to specific transparency requirements, while the most capable models, such as GPT-4, will face a stricter regime.

Asia

China
- Government-Led Strategy: the country has a national AI development plan with the aim of becoming a global leader in AI by 2030.
The plan emphasises both technological development and regulation.
- Regulatory Measures: China has introduced several regulatory measures to control the influence of AI, especially in social media, surveillance, and facial recognition.

Japan
- Collaborative and Ethical Approach: Japan promotes AI development with a focus on ethical considerations. The government works closely with industry and academia to ensure that AI is used in beneficial ways.
- AI Development Guidelines: in 2019, Japan's Ministry of Internal Affairs and Communications released the "AI Utilisation Guidelines", which established principles for transparency, human-centricity, and data privacy.

India
- NITI Aayog AI Strategy: this Indian government agency released a national AI strategy in 2018 focused on areas of social development, including healthcare, agriculture, education, and smart cities.
- Pending AI-Specific Legislation: while the data protection rules in India's proposed Personal Data Protection Bill are relatively strict, specific AI regulation is still in the works.

Regional and International Cooperation
Many Asian countries participate in global initiatives toward ethical AI, including the OECD AI Principles and GPAI (the Global Partnership on AI). They collaborate to develop common data privacy, ethical, and safety standards for AI development.

USA
Artificial Intelligence is used in many areas, such as health, transportation, and banking, but there is no federal law that fully and specifically regulates it in the United States. However, various measures and initiatives have been taken, such as the following.

The National AI Initiative Act (2020)
This is among the primary pieces of legislation in the United States providing guiding policy on how the federal government should handle artificial intelligence.
It was passed by Congress as part of the National Defense Authorization Act for Fiscal Year 2021 and became law on 1st January 2021, when Congress overrode President Donald Trump's veto. More precisely, the law aims to:
- Promote AI research and development in the United States.
- Ensure U.S. technological leadership in AI relative to other countries.
- Coordinate efforts among federal agencies, universities, and the private sector to stimulate AI innovation and application.
- Address ethical issues regarding the use of AI, including safety, privacy, and protection from algorithmic bias.

AI Risk Management Framework (2023)
The National Institute of Standards and Technology (NIST) is the United States federal agency that develops standards and guidelines on safety and quality for many different industries, including technology. It developed the "AI Risk Management Framework", released in January 2023, which provides guidance on how organisations can manage AI-related risks. This is not a law but a voluntary guide to help businesses and governments adopt good practices in the use of AI.

Blueprint for an AI Bill of Rights (2022)
In 2022, the Biden Administration introduced the "Blueprint for an AI Bill of Rights", which is not a law but a guide to protect citizens' rights and ensure AI is used decently and responsibly. It sets guidelines on matters such as:
- Protection against automated discrimination: people must not be discriminated against as a result of decisions made by algorithms, and companies and governments should be transparent about how those systems work.
- User Transparency and Understanding: users must know exactly how their data contributes to AI-enabled applications; they should understand how such data is collected, what is done with it, and why.
- Privacy and Security Protection: since AI collects huge amounts of sensitive data, it is essential to protect that data and ensure that people retain control over it.
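Protections against automated discrimination, like the pre-deployment testing and audits discussed earlier, ultimately rest on measurable checks of algorithmic decisions. As a purely illustrative sketch (the data, group labels, and flagging threshold below are hypothetical, not drawn from any regulation), a minimal bias audit might compare selection rates across demographic groups:

```python
# Illustrative pre-deployment bias audit: demographic parity difference.
# A minimal sketch, assuming hypothetical hiring decisions per group;
# this is not a legally prescribed test.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'hire' / 'approve') decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates between any two groups.

    decisions_by_group maps a group label to a list of 0/1 decisions.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit data: algorithmic hiring decisions per group.
audit = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 selected (0.75)
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 selected (0.25)
}

gap = demographic_parity_gap(audit)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold; real cut-offs are context-specific
    print("Flag system for review before deployment")
```

Real audits use richer metrics (for instance, comparing error rates rather than raw selection rates) and legally informed thresholds; the point is only that a right to "protection against automated discrimination" presupposes this kind of quantitative pre-deployment test.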
Algorithmic Accountability Act (2023)
This act was proposed in 2023 and, if approved, would require companies to use artificial intelligence in a more transparent and accountable way, particularly with respect to how their algorithms work. If passed, it would be among the first federal laws to regulate an aspect of AI.

Case Studies Related to AI in IT Law
AI is reshaping IT law, and several high-profile cases illustrate its impact on issues such as intellectual property, privacy, and tech companies' responsibilities. The following are some of the case studies:
- Intellectual Property Disputes with Generative AI Platforms: OpenAI, Meta, and Microsoft have been sued by rights holders claiming that the practice of training models on copyrighted data amounts to infringement.
- Privacy and Data Use: privacy laws are often involved in AI cases, such as that of Elon Musk's X, formerly Twitter, which allegedly used user data for training AI models without proper consent.
- Predictive Analytics in Legal Workflows: AI tools such as Reveal-Brainspace automate legal tasks; in one reported case, up to 85% of document review was automated, showing their effectiveness in complicated workflows.
There are also several real-world cases that illustrate the impact of AI on IT law, for example:
- Authors vs. OpenAI and Meta (2023): authors Michael Chabon and Sarah Silverman are suing OpenAI and Meta for allegedly using their books without permission to train AI models, citing intellectual property infringement.
- GitHub Copilot Lawsuit (2022): a class action was filed against GitHub, Microsoft, and OpenAI over Copilot, alleging that it illicitly reproduces code without attribution or regard for open-source licences. The lawsuit raises legal and AI compliance issues around software development.
- X and Data Privacy Violations (2024): the Austrian-based NGO NOYB filed a complaint against X for using personal data to train AI models without consent, in breach of the EU GDPR.
Such cases bring out both the challenges and the opportunities of integrating AI into industries bound by strict legal and ethical standards. The outcomes of these lawsuits will be important in shaping the still-evolving face of IT law as it applies to AI.

Conclusion
AI has pushed IT law into new areas, making us think about more than just protecting data. It has raised big questions about fairness, privacy, and how these technologies affect our lives. There are growing calls for laws that make sure AI is fair, transparent, and trustworthy. Different countries are handling this in their own ways, with the EU focusing on privacy while the U.S. and China each follow their own priorities. A major concern is privacy, since AI systems rely on so much personal data, raising questions about accountability and our rights. These issues are already playing out in real-life examples like facial recognition and self-driving cars. As AI keeps advancing, the law has to keep up, making sure that technology keeps moving forward but also that it is used in a way that protects people and serves the greater good.