Big Data & Artificial Intelligence - European Law - 2023-2024

Summary

This document is a synthesis of lecture notes on European law, big data, and artificial intelligence, covering their applications, advantages (such as eliminating human error and 24/7 availability), disadvantages (such as the cost of implementation), and the different types of AI (assisted, augmented, and autonomous). It also examines ethical and economic considerations and the need for potential regulation of AI.

Full Transcript


Garance Lamand M2 2023-2024 European law, (big) data and artificial intelligence applications

Lecture: ex cathedra; no research paper this year. Exam: oral. Materials: structure of the lecture available online + sources to read (mandatory) + PowerPoint slides.

Introduction: Discussion about the good and the bad of AI. AI can be used as a tool, in good or bad ways.

Advantages of AI: Everyone knows that AI gives businesses an edge. The Appen State of AI Report for 2021 says that all businesses have a critical need to adopt AI in their models or risk being left behind. Companies increasingly utilize AI to streamline their internal processes (as well as some customer-facing processes and applications). Implementing AI can help a business achieve its results faster and with more precision.
- Eliminates human error and risk
- 24/7 availability (e.g. quick-reply systems on banking applications)
- Takes over repetitive jobs
- Cost reduction
- Data acquisition and analysis: when it comes to processing data, the scale of data generated far exceeds the human capacity to understand and analyze it. AI algorithms can help process higher volumes of complex data, making it usable for analysis.

Disadvantages of artificial intelligence: With all the advantages listed above, it can seem like a no-brainer to adopt AI for your business immediately. But it is also prudent to carefully consider the potential disadvantages of making such a drastic change, which include the cost of implementation and degradation over time.
- Costly implementation: the development of AI can be extremely expensive.
- Lack of emotion and creativity
- Degradation: this may not be as obvious a downside as the ones cited above, but machines generally degrade over time.
- No improvement with experience: AI cannot naturally learn from its own experience and mistakes. Humans do this by nature, trying not to repeat the same mistakes over and over again.
- Reduced jobs for humans
- Ethical problems

Good ways to use AI: AI can be used to save humans time, energy, and boring jobs (optimization). It can also be used in medicine, education, and vocational training. One of its main advantages is to support human intelligence.

There are several types of AI:
- Assisted intelligence: primarily used as a means of automating simple processes and tasks by harnessing the combined power of big data, cloud, and data science to aid in decision-making. By performing more mundane tasks, assisted intelligence also frees people up to perform more in-depth tasks. The main goal of assisted intelligence is improving things people and organizations are already doing: while the AI can alert a human about a situation, it leaves the final decision in the hands of end users. The exception would be those cases in which a predetermined action has been clearly defined.
- Augmented intelligence: focuses on the technology's assistive role. This cognitive technology is designed to enhance, rather than replace, human intelligence. This "second-tier" AI is often what people consider when discussing the overall concept, with machine-learning capabilities layered over existing systems to augment human capabilities. Augmented intelligence allows organizations and people to do things they couldn't otherwise do by supporting human decisions, not by simulating independent intelligence.
- Autonomous intelligence: processes are automated to generate the intelligence that allows machines, bots, and systems to act on their own, independently of human intervention. The thought is that, like human beings, AI needs autonomy to reach its full potential.

Some people use AI the right way, but there may be a bad effect, because AI promises a significant economic gain (more productive, so more money). → Focus on economic gain.
GDP will increase by 14% if AI progress continues. AI increases consumer demand because the new technologies we can now use are personalized (they know us), not only standardized. Because of this promise, it is very attractive for certain companies to invest heavily in AI, since it will bring more economic gain.

Companies that dominate in AI: The market has long been dominated by GAFAM. These companies have developed strategies to build their own AI or to buy it; all of them have an interest in staying in the market. E.g. Facebook acquired a startup working in AI to develop VR; Apple's strategy is to develop its own AI. For a long time, Apple was famous for Siri (2010), but now Alexa outperforms Siri. Every new technology has a major impact on the market. GAFAM dominated the market for a long time, and then OpenAI came around. It was founded in 2015 and created ChatGPT, which launched in 2022 with amazing success in its first weeks.
- Partnership: Microsoft invested 10 billion dollars in OpenAI.

ChatGPT: ChatGPT is a very seductive tool; it is improving, but we know it will not be correct all the time. Discussion: Do we trust the upgraded version? There is always a risk of error. If we know the risk of error, do we still use it? Yes, it depends on the use. ChatGPT vs. Google search: ChatGPT is good at answering questions but not really at creating something. Economic efficiency seems to come first.
- The big issue is reliance on AI. In the EU there is always a duty of human control and accuracy checking, but it is not always possible.

Other benefits: The EU Parliament stated that AI is there to assist people in helpful ways: healthcare, transport, lower costs, optimization, more efficient labor, public services, improved sustainability, and democratized security and safety.
Bad ways to use AI: AI can be used in bad ways: damaging mental health, copyright infringement, creating insecurity, misdiagnosis in the health sector (in radiology, for example), discrimination (bias).

Case: Amazon's discriminatory recruitment. The company's experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars, much as shoppers rate products on Amazon. But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon's computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. In effect, Amazon's system taught itself that male candidates were preferable. It penalized résumés that included the word "women's", as in "women's chess club captain", and it downgraded graduates of two all-women's colleges, according to people familiar with the matter. Amazon edited the programs to make them neutral to these particular terms, but that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory. The company ultimately disbanded the team because executives lost hope for the project.

Other really messed-up scenarios:
- Voice simulation: A woman received a phone call with the voice of her daughter asking for help, and a man on the phone demanded money. While the woman was on the phone, her daughter was actually there with her. This is not a very cool way to use AI.
- Actors and writers can be replaced by AI, because it can recreate the image of an actor and it is cheaper.
- Female avatar sexually assaulted on Meta's VR platform: Campaigners say the avatar of a 21-year-old researcher was sexually assaulted on Meta's virtual reality platform Horizon Worlds.
- Social study of people's behavior: People behave differently in virtual reality! But there is a human behind the AI.
- Use of AI in armed conflict: In the war in Ukraine, kamikaze drones are used. Ukraine needs 1,000 drones to start with, because they are cheap, reliable, and can function even under difficult communication conditions. These kamikaze drones will become more sophisticated.
- Effects on the environment and sustainability

Should we regulate AI (discussion)?
1. Why AI should NOT be regulated:
a. Stifling innovation and progress: Regulations will slow down AI advancements. Not allowing companies to test and learn will make them less competitive internationally.
b. Complex and challenging implementation: Regulations relating to world-changing technologies can often be too vague or broad to be applicable, which makes them difficult to implement and enforce across different jurisdictions.
c. Potential for overregulation and unintended consequences: Regulation often fails to adapt to the fast-paced nature of technology. AI is a rapidly evolving field, with new techniques and applications emerging regularly.
2. Why AI should be regulated:
a. Ensuring ethical use of artificial intelligence: Regulation is needed to apply and adapt existing digital laws to AI technology. This means protecting the privacy of users (and their data). Regulation can help foster trust, transparency, and accountability among users, developers, and stakeholders of generative AI.
b. Safeguarding human rights and safety: Beyond the "basics", regulation needs to protect populations at large from AI-related safety risks, of which there are many. Most will be human-related risks: malicious actors can use generative AI to spread misinformation or create deepfakes.
c. Mitigating social and economic impact
→ Regulation has to balance risks in the market with fundamental-rights aspects.

Chapter I: Defining the objectives of the EU's AI regulation

I.
Introduction:
1. How did we get here? AI as the culminating point of the 4th industrial revolution.

Discussions:
What is progress? It is evolution: the movement to an improved or more developed state, or to a forward position.
What is technology? It is a tool that humans can use: the practical application of scientific knowledge to solve real-world problems and improve human welfare. It aims to achieve either a commercial or an industrial objective.
What is technological progress? It is when innovations lead to technical developments, which results in growth in production within an economy. This progress leads to increased efficiency in the production process, enhances labor productivity, and impacts other factors of production, which causes overall economic growth.

There are many different types of progress. Progress is historically linked to science. Scientific methodology is a complex notion. When we talk about advances in science, we can talk about different things: economic, professional, educational, methodological... All of these advances have a practical impact. Scientific progress can increase the effectiveness of tools and techniques, but it can also entail social progress (e.g. increased quality of life). The main idea is: the more developed science is, the more progress we will be able to see.

How does development concretely happen? There are many ways to explain how technological advancement happens, but one of them is the integration of capital into labor relations.

Labor and capital (Marx's idea): Marx wrote Capital, and in this work he analyses the idea of commodity production. A commodity is an object that we can use, produce, and exchange on the market. For him, there are two conditions:
1. We have to have a place to exchange the commodity.
2. Social dimension: we need people to produce.
Commodities have two types of value: use value and exchange value (price).
Use value is something that is self-evident. How do we determine the price of something? The labor theory of value (LTV) argues that the economic value of a good or service is determined by the total amount of "socially necessary labor" required to produce it.

A. From coal and steam to data: brief overview of the historic development of innovation and its impact on market law

Key point: Each industrial revolution is characterized by capital replacing labor (e.g. translation with AI). With AI, intelligence is replaced. Capital has always replaced labor: it started by replacing manual labor and is now replacing cognitive skills.

How do we define an industrial revolution? It is a multifactorial concept, close to technological innovation: a revolution in which we use technologies to improve something in an industrial context. Technology means increased industrial utilization. What was that technological innovation? Manufacturing and transport.

Throughout history, people have always been dependent on technology. Of course, the technology of each era might not have the same shape and size as today, but for its time, it was certainly something for people to look at. People would always use the technology they had available to help make their lives easier and, at the same time, try to perfect it and bring it to the next level. This is how the concept of the industrial revolution began. Right now, we are going through the fourth industrial revolution, aka Industry 4.0.

Four industrial revolutions / four periods:
1st IR: mechanical period
2nd IR: electrical period
3rd IR: digitalization period
4th IR: big data period

During the first two revolutions, we replaced human skills by automating production processes (standardized products). The last two are characterized by connectivity (the internet) and customization (now, products can be customized to our needs).
To get back to the labor-capital idea: the period from the mid-century to now is totally different.

The first industrial revolution: A mechanical revolution that started in England. Factories replaced manual labor, and from England this factory system started to spread. It was dominated by machine-based production, and we already start to see mass production taking over from small-scale production. Mass production is very specific to the first revolution. One of the factors that contributed to this first blooming was innovation in transport (coal and steam). Coal and steam replaced animals and humans: faster, easier, cheaper. → Mechanization.

The second industrial revolution: Called the electrical revolution. We discovered new ways to use natural resources (oil, electricity, and steel). This period is characterized by two main concepts: advances in electricity (new sources of energy) and the use of these sources for mass production. Unlike the first revolution, it is associated with the United States. Why was the US the leader? Migration: 40 million people and immense human resources available on the territory. E.g. Ford implemented a mass-production approach, which allowed for a lot of products. → The system of mass production fostered mass consumption.

What are the characteristics of the machine period? Mass production, profits, oil for energy. What happens to the people replaced by machines? The technologies will replace people, but people will become more educated.

To summarize: In the first and second revolutions, production processes became more effective, generated more gains, and became automated. The first common point is that physical capital replaced manual labor, but human capital remained intact. The effect is to replace labor but also to create labor: the creation of workers with new skills (e.g. doctors do not disappear but have to learn new things). Workers are pushed to become more educated and more specialized. Ex.
Between 1910 and 1940, there was a big increase in student enrollment. Overall, people are pushed to become smarter and more specialized. Immediate consequence of technological change: new human needs. What is the initial reaction of people? Resistance (the Luddites, after Ned Ludd).

The third revolution: Connectivity sources become a thing. Digitalization starts to appear: miniaturization and a shift in the way commodities are produced and used. We now have more programmable machines (robots), plus computers, the internet, and social networks.

The fourth revolution: AI, bitcoin, 3D printing. It is a continuation of the third, because at some point around 2010 the industrial internet emerged (in each revolution we discover something (steam, electricity, the internet) and discuss the ways to use this new thing). How do we bring digitalization into industrial operations, and how do we transform the business model? That is the kind of change at stake.

B. The variety of currently available data-powered technologies and prospects for future innovation and development

Example of France, "Industrie du futur":
(i) cutting-edge technologies (3D printing, the Internet of Things, and virtual/augmented reality)
(ii) assisting French companies in their digital transformation
(iii) offering training to overcome the problems posed by the requirement of new job-related knowledge and skills
(iv) strengthening international collaboration with other EU member and non-member states
(v) encouragement of IdF

Example of China: strategic sectors for smart manufacturing:
- New information technology
- Numerical control tools
- Aerospace equipment
- High-tech ships
- Railway equipment
- Energy saving
- New materials
- Medical devices
- Agricultural machinery
- Power equipment

China is more focused on profit and the economy; France is focused on the people who will be able to use the technologies.
Principles of the 4th IR:
- Interoperability
- Decentralization
- Virtualization
- Real-time capacity
- Modularity
- Service orientation

Frey & Osborne (2013):
- Computers increasingly challenge human labour in a wide range of cognitive tasks (i.e. computerisation of non-routine tasks).
- Out of 702 examined professions, F&O predict that 47% are at high risk of automation.

II. Defining the operative concepts:

Recap: the 4th industrial revolution as a data revolution. The old meets the new: new technologies challenging existing regulatory frameworks.

'Megadata'/'big data'/'massive data': A set of data so large that no conventional database or information management tool can work with it. Stats: we generate about 2.5 quintillion bytes of data every day (emails, videos, GPS signals, transactional records from online purchases...).

Comparable to or different from the previous revolutions? With every revolution, there is a new type of capital that replaces human labor. Machines in the 4th revolution still replace people, but the type of human skill replaced is the new thing: cognitive skills.
- Advantages: access to massive databases in real time.
- Disadvantages: data manipulation/consumer manipulation (e.g. speculative market ventures).

Technologies having emerged as a consequence of the "big data" phenomenon:
- Storage technologies: e.g. deployment of cloud computing.
- Processing technologies: e.g. development of new databases adapted to unstructured data and the development...

Villani, For a Meaningful AI: Towards a French and European Strategy (2018), at 22: "Data is the raw material of AI and the emergence of new uses and applications depends on it." Data is not found in nature; that is a difference from the other revolutions. People are maybe more vulnerable.

A. The concept of AI?
1. Artificial? AI is artificial because it is not natural. Conceptual distinction between physis (nature or reality) and logos (intellect and/or consciousness).
Cf. Romporti (2019), artificiality has several dimensions:
- Static quality: artificiality as a static quality of AI resides in the physis-realm, since the agent, material or immaterial, is not natural per se. = The physical, tangible substrate that the system needs in order to function; it is not found in nature. It is self-evident that AI could make use of a body itself.
- Artificially built-in naturalness: AI systems are non-natural agents capable of replicating cognitive processes qualified as natural.
- Dynamic quality (cf. the Turing test): the simulation of humans as a dynamic process occurs in the logos-realm, as non-natural agents are increasingly able to develop human skills like seeing, recognizing and manipulating an object, speech recognition, etc. = The mind dimension is artificial; it is because we have AI that can simulate the reasoning of humans.

In the old technologies, we only had the static quality aspect and not the logos dimension, the mind dimension. Now we are moving into the mind dimension.

2. Intelligence? Intelligence is a complicated concept to define and to quantify; it is not a clear-cut concept. There is a generic type of intelligence, associated with people and not animals, and different types of intelligence (problem solving and adapting to context).
a. Comprehension: Zadeh, Power (2019): a joined capacity to comprehend and anticipate how things may plausibly behave or change, now or one step forward, and to draw on these intuitions to decide the necessary actions/models to take. When we are in a context, we understand the context and we can anticipate the outcome.
b. Managing unknowns: Pennachin, Goertzel (2007): navigating in, and manipulating, an environment that remains entirely or partially unfamiliar to the agent. Intelligent entities are capable of making decisions in contexts where they do not know all the variables (e.g. running in the woods and getting lost).
3.
Definition of AI:

AI Act, COM(2021) 206 final, Art. 3(1): "AI system" means software that is developed with one or more of the techniques and approaches (...) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

EC JRC flagship report on AI: AI is a generic term that refers to any machine or algorithm that is capable of observing its environment, learning, and, based on the knowledge and experience gained, taking intelligent action or proposing decisions. There are many different technologies that fall under this broad AI definition. At the moment, ML techniques are the most widely used.

EU AI strategy, COM(2018) 795 final: AI refers to systems that display intelligent behaviour by analyzing their environment and taking action, with some degree of autonomy, to achieve specific goals.

Community survey on ICT usage and e-commerce (2021): AI refers to systems that use technologies such as text mining, computer vision, speech recognition, natural language generation, machine learning, and deep learning to gather and/or use data to predict, recommend or decide, with varying levels of autonomy, the best action to achieve specific goals. (What AI does.)

HLEG: AI refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data, and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behavior by analysing how the environment is affected by their previous actions.
As a scientific discipline, AI includes several approaches and techniques, such as machine learning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization) and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).

Scholars – Y. Bathaee, at 898: AI refers to a class of computer programs designed to solve problems requiring inferential reasoning, decision-making based on incomplete or uncertain information, classification, optimization, and perception.

→ Common characteristics: the intelligence aspect (comprehension and managing unknowns), a reference to artificiality, and autonomy. Only one definition (the AI Act) mentions humans; the others do not.

2. Taxonomies of AI, functionalities and applications:

Common features: AI can simulate human cognitive skills, which can include:
- Perception: shape/image recognition in the environment, including consideration of real-world complexity.
- Information processing: collecting and interpreting inputs (in the form of data).
- Decision making (including reasoning and learning): taking actions, performing tasks (including adaptation and reaction to changes in the environment) with a certain level of autonomy.
- Goal attainment: specialization; discovery of efficient ways of reaching preassigned goals.

The issue is autonomy (self-governance): the meaning of autonomy in everyday language seems to require something above and beyond sophisticated automation (cf. Thórisson, Helgason 2012). What makes an AI system autonomous? Autonomous from what and from whom? AI systems are capable of perception, information processing, decision making, and goal attainment with little or no human intervention.
If we focus on the concept of autonomy, we could say that, generally speaking, AI is intelligent because it includes at least two types of autonomy: cognitive autonomy and decisional/predictive autonomy. What about moral autonomy? Morality is a big issue: Amazon's recruitment algorithm, trained on Amazon's historical recruitment data, recruited only white men. Solum wrote on this and said that AI misses something: moral agency.

Müller, Bostrom (2016): AI systems will probably (50%) reach overall human ability by 2040-2050, and very likely (with 90% probability) by 2075. From reaching human ability, they will move on to superintelligence within 2 years (10%) to 30 years (75%) thereafter. Experts say there is a 31% probability that this development turns out to be bad or extremely bad for humanity. The debate on how intelligent AI systems are has triggered some futuristic predictions: a few years ago, we were in the age of artificial narrow intelligence; now we are in the age of general intelligence; in a couple of years, we should arrive at superintelligence, systems better than humans will ever be.

Key stages in AI programming: This is the pattern. We start with the goal; then we have to gather and clean the data. We then have our "baby AI" that we can train: the model is exposed to data during training, and we then expose it to new data to test it. The quantity of training data, the sample, is something we can control. The smaller the sample, the more accurate the system might be with respect to the goal, but the more biased it might be. The bigger the training sample, the more the bias decreases, but the variance increases: the system can make correlations that are not relevant to your goal. Once you are happy with your baby AI, you validate it, and then you can release it on the market. E.g. a recruitment algorithm where one of the factors chosen is hair color (which has nothing to do with anything).
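The bias/variance trade-off sketched above can be illustrated with a small, self-contained numerical example (the data, models, and numbers here are invented purely for illustration): a model that ignores the data underfits (high bias), a model that memorizes the training set overfits (high variance), and a model matching the true shape of the data sits in between.

```python
import random

random.seed(0)

def make_data(n):
    """Noisy samples from the true relationship y = 2x + noise."""
    return [(x, 2 * x + random.gauss(0, 1))
            for x in (random.uniform(0, 10) for _ in range(n))]

train, test = make_data(30), make_data(30)

mean_y = sum(y for _, y in train) / len(train)

def underfit(x):
    # High bias: always predict the training mean, ignoring x entirely.
    return mean_y

def overfit(x):
    # High variance: memorize the training set and predict the y of the
    # nearest training x (zero training error, but it fits the noise).
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Balanced: ordinary least-squares line y = a*x + b, matching the true shape.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def balanced(x):
    return a * x + b

def mse(model, data):
    """Mean squared prediction error of `model` on `data`."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

for name, m in [("underfit", underfit), ("overfit", overfit), ("balanced", balanced)]:
    print(f"{name:9s} train={mse(m, train):7.2f}  test={mse(m, test):7.2f}")
```

With the underfitting model, both errors are large; the memorizing model scores a perfect zero on the training set but worse on the test set; the least-squares line keeps both errors low, which is the balance the lecture describes as a "well performing model".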
That is overfitting: a criterion that prima facie is not the goal is considered relevant. However, recruitment is by definition a biased system; the point is that preferences need to be based on relevant factors. A well-performing model is a model that can find the balance between underfitting and overfitting. In terms of variance and bias, there is always a risk.

What is bias? The amount of assumptions/prejudices your model makes about the problem you are trying to frame (more assumptions → higher bias → underfitting). Solutions: train the model more, increase model complexity, try a new model architecture.

What is variance? The sensitivity of the model to the training data (more sensitivity → higher variance → overfitting). Solutions: introduce more data, use regularization, try a new model architecture.

Example of model resilience. One simple way to distinguish models: are they interpretable or not?
1. Linear (interpretable) models: the more linear a model is, the more transparent it will be. Linear rule-based models are conditional (if A then B) (e.g. decision trees, KNN, fuzzy logic, Bayesian networks).
2. Non-linear (less/non-interpretable) models: the thing with a (random) forest is that we have several layers; several trees interact, which makes the complexity of the model much greater. In terms of efficacy, the model is probably very good, but increased complexity also decreases the possibility of human explanation. The black-box scenario is associated with complicated AI systems (= not fully transparent).

Open questions... Are AI systems agents? Louis Marx & Co. and Gehrig Hobon & Co. Inc. v. United States (40 Cust. Ct. 610, 610 (1959)): "A robot is a mechanical device or apparatus, a mere automaton, that operates through scientific or mechanical means."
→ They are smarter than people. Should we consider them agents or not? Should we consider them products or not?
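The "if A then B" structure of an interpretable rule-based model mentioned above can be sketched in a few lines (the rules, fields, and labels below are hypothetical, purely for illustration):

```python
# A minimal rule-based classifier: an ordered list of (condition, label) rules.
# Every decision is an explicit "if A then B" branch, so a human can read
# exactly why any given prediction was made -- unlike a black-box model.
rules = [
    (lambda person: person["experience_years"] >= 5, "interview"),
    (lambda person: person["has_degree"], "review"),
]

def classify(person, default="reject"):
    """Return the label of the first rule whose condition matches."""
    for condition, label in rules:
        if condition(person):
            return label
    return default

print(classify({"experience_years": 7, "has_degree": False}))  # -> interview
print(classify({"experience_years": 1, "has_degree": True}))   # -> review
print(classify({"experience_years": 1, "has_degree": False}))  # -> reject
```

A random forest, by contrast, aggregates many such trees of learned rules: each individual tree stays readable, but it is the interaction of the layers that makes the overall model hard for a human to explain.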
Is human control always possible? Columbus-America Discovery Group, Inc. v. The Unidentified, Wrecked, and Abandoned Vessel, S.S. Central America (1989 A.M.C. 1995 (1989), the "telepossession" test): "(robots were) able to generate live images of the wreck and had the further capability to manipulate the environment at the direction of people."

In this 1989 case, a group of scientists used a robot to explore a shipwreck. The robot stumbled upon a discovery (a treasure) that nobody was looking for. There were many legal questions in this case, but one of them was whether human control and foresight over the robot (i.e. over the humans controlling it when it made the discovery) were possible. Obviously, the answer is no: this was a case of an unpredicted consequence of the use of AI. The case also established a principle that we still have today and that we will study in relation to the regulation of AI in the EU: the presumption that human control and direction are always possible, even if practically they might not be. It means that a human will always be responsible, even in cases where the AI may have caused a harm by itself, or in a way such that a human cannot explain the cause of the harm. That is debatable: why should a user take the blame for a harm caused by his system when he has no idea how it was caused? But consider the alternative: say that AI systems are agents, and you claim "I was discriminated against by your AI". Who is going to compensate you for the discrimination? The AI? No. You always need a human somewhere, and this is why we need to presume the direction of people.

B. Regulation:
1. Regulation tout court: Altering behaviour (Moses' bridges). A racist project by the architect Robert Moses: at the beginning of the 20th century, this was a road that linked New York City to the Long Island beaches.
Robert Moses apparently was a racist and instructed his engineers to build his bridges very low, so that buses transporting black people and, essentially, poor immigrants could not pass under them.
→ Is this regulation? Regulation shapes behavior. Law is a type of regulation, but regulation can take many different shapes and forms: a lot of things can regulate you, like the height of a bridge.

Black (2014), Yeung (2018): Regulation (or regulatory governance) translates into intentional attempts to manage risk or alter behavior in order to achieve some pre-specified goal.
- Distinction between regulation of AI and AI (algorithmic) regulation.
- Regulation by rules/principles/standards. The concept of "regulation by design" (human-AI regulatory frameworks).

2. Regulation of AI: Action 2 in the AI White Paper, COM(2020) 65 final: AI Watch, "Estimating investment in General Purpose Technologies: the case of AI investment in Europe".

Example 1. Hacker (2018), Grozdanovski (2021 CMLRev.): Basic programming stages of ML systems:
- Labelling data
- Training (bias and variance)
- Validation (testing performance in practice)

In 2018, Amazon's recruitment algorithm was found to be biased against women. The big question: how did the algorithm learn to be biased?
Establishing a general rule that will apply to present and future systems and will ensure that they are bias-neutral is kind of impossible, and still you have to do it… The teacher was at a conference in Spain two weeks ago and learned about colleagues' projects in the Netherlands: they are trying to figure out what gendered associations some systems make, and it is very surprising. They found that the system they studied tends to associate the word yoga with women, because it assumes that people who do yoga are mostly women; in that sense, such systems sometimes make surprising associations. Example 2 : Tesla predictive error : Cf. Tesla, Inc v. McKechnie Vehicle Components USA, Inc. et al, California Northern District Court, Case No 5:21-cv-01962-BLF. Question : Could the accident have been avoided? Who should repair the harm? Tesla is smart about that and does not want to be responsible: it asks drivers to keep their hands on the wheel, and takes no responsibility. Example 3 : Algo trading : United States v. Coscia, 866 F.3d 782, 786 (7th Cir. 2017) : Algorithm capable of spoofing (i.e. ability to place phantom orders in the market). Question : how to apply the intent-to-harm (criminal liability) test? In this case, we have an algorithm that was capable of trading, and it turns out that the algorithm learned how to spoof. Spoofing is a manipulation of the financial market: the algorithm learned that it could place phantom orders in the market to push it in a certain direction, then withdraw the phantom orders, which obviously caused investors a loss of money. The trouble is that spoofing is a criminal charge in the United States, so if you spoof you might go to jail; obviously, you cannot lock up the algorithm. American law requires proof of intent to harm, but AI systems are not agents and cannot have intents.
They were looking for a human who could display the intention to harm. The first people they went to were the programmers, but the programmers pointed out that it was the user, the company that commissioned the algorithm, who told them specifically: “I want a system capable of spoofing”. Example 4. Art : Originality warrants Copyright (Originality = intellectual creation that reflects the author’s personality, no other criteria such as merit or purpose being taken into account, e.g. Dir. 2006/116 preamble, pt 16). Question 1. Is AI-generated artwork original? Question 2. Who should be the holder of authorship rights? The right regulatory approach = the right balance between objectives : Getting back to regulation, the issue is how you shape human agency in all of these cases. Whose agency are you trying to shape? The programmers'? The users'? The issue with regulation is not to have a perfect regulation, but to strike a balance, a workable regulation. > market rationality and values. Challenges : - Diversity of AI (general or sectoral regulation?) - Diversity of actors (programmers, users, deployers, cf. COM(2021) 206 final) - Unpredictability of future innovation - Lack of clarity on duties (supervision, control, use, accountability, liability) - Contradictory objectives (benefits from increased market use vs protection of human agents) 3. Regulation through AI: Rechtbank Den Haag, 5 Feb. 2020, ECLI:NL:RBDHA:2020:865. The authorities used an AI that was biased and turned out to discriminate against people holding two nationalities. US District Court (Eastern District of Michigan – Southern Division), Cahoo et al. v. Fast Enterprises et al., case n° 17-10657. C. Selecting the objectives of AI in the EU (and the world) Rights-based approach vs risk-based approach: The two approaches are not incompatible: the risk that AI poses is a risk to fundamental rights.
The dichotomy is, in the field of AI, a bit paradoxical. Usually, when we regulate risk, we follow a product-safety logic. For example, if we are using a chemical substance, we know the dosage that is toxic for humans and we can regulate the dosage of the substance. The risk-based approach is thus usually associated with products that cause problems for health and the environment. With artificial intelligence, it is different: mostly, the risks associated with AI are not the risks we usually see in other areas of law. The AI Act tries to merge the two: it departs from the idea that AI is risky for fundamental rights, and tries to find a balance between the two approaches. i. AI as a threat to human dignity (and its variants) AI misses something! (see supra). Moral reason! AI makes efficient decisions, but it is not necessarily aware of discrimination. Protection of human dignity: why is it so important? AI can process a lot of data about us, even data it is not supposed to process. The EU Commission instituted a High-Level Expert Group (HLEG) on Ethics in AI, whose point of departure was the idea of human dignity. a. Autonomy: In the guidelines, the HLEG said that autonomy is the first thing we need to protect: humans need to continue to take decisions for themselves. Issue of reliance: AI should remain a tool, but we need to keep autonomy in our decisions. HLEG, Ethics Guidelines (2019) at 10: “human beings should remain free to make decisions for themselves. This entails freedom from sovereign intrusion, but also requires intervention from government and non-governmental organizations to ensure that individuals or people at risk of exclusion have equal access to AI’s benefits and opportunities.” b. Integrity: Integrity, because AI should not coerce us or force us to make decisions. There are vulnerable groups that need specific protections (Ex.
Deep fake). We try to protect vulnerable groups, especially children and elderly people. HLEG, Ethics Guidelines (2019) at 12: “Humans interacting with AI systems must be able to keep full and effective self-determination over themselves and be able to partake in the democratic process. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills.” c. Fairness (enjoyment of fundamental entitlements and protection against unfair biases) Fairness is perceived essentially as diversity, equal access and non-discrimination. The goal (essentially an idea of fairness): achieve human flourishing, enhanced individual and societal well-being as well as progress and innovation. HLEG, Ethics Guidelines (2019), at 12: substantive fairness implies a commitment to “ensuring equal and just distribution of both benefits and costs and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatization.” ii. Choosing the regulatory approach: 1. The ethics-first approach in the EU: human dignity as cornerstone of the EU’s regulation of AI. Based on this principle, the HLEG selected four key principles that lay the ground for the legal act: 1. Respect for human autonomy 2. Prevention of harm 3. Fairness 4. Explainability (in the sense of the Ethics Guidelines, tied to transparency). Corresponding to these principles, the HLEG selected seven key actions. The EU very clearly chose the rights/ethics-first approach. It is not the case everywhere: the EU wants to be a pioneer in the regulation of human rights in AI. 2. Examples of ‘markets first (rights second)’ trends outside the EU USA. Under Trump, the executive order was that the USA must lead the AI race.
There were several principles and actions, like in the EU, but none of them concerned human-rights protection… Then came Biden: from 2020, the US started to focus more on the human-rights aspect of AI. Recently, there has been a Blueprint for an AI Bill of Rights, with principles like safety, privacy, explainability and human control, which gave a human-rights frame. Currently, the Algorithmic Accountability Act is under discussion; it should lay down a framework of basic principles. China: China's strategy was also market-driven: “We will be the leader of AI”. In 2023, it adopted the Deep Synthesis provisions, which focus on deep fakes and vulnerable people. China does not follow the EU trend of one horizontal regulation for all AI systems; it is more sectoral, with no ambition to have one law regulating all AI. In the USA, the Blueprint and the Algorithmic Accountability Act are federal, but state-level acts remain possible. These two instruments are general, but in the EU we are more ambitious on that point. 3. The regulatory conundrum: protection (of rights) vs prevention (of risks) We are going to try to find a balance between a risk-based approach and a rights-based approach. The AI Act clearly says that it follows a risk-based approach, but at the same time it protects human dignity. This strikes the balance, finding the best of both worlds. In the EU, a risk-based approach... AI Act COM (2021) 206 final Pt 2.3. The proposal builds on existing legal frameworks and is proportionate and necessary to achieve its objectives, since it follows a risk-based approach and imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. For other, non-high-risk AI systems, only very limited transparency obligations are imposed, for example in terms of the provision of information to flag the use of an AI system when interacting with humans.
For high-risk AI systems, the requirements of high-quality data, documentation and traceability, transparency, human oversight, accuracy and robustness are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI and that are not covered by other existing legal frameworks.”... with an emphasis on fundamental rights/values: Pt 15: Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child. When the EU Commission decided to regulate, it was confronted with five options. General Data Protection Regulation Chapter 2. The template for the EU’s AI regulation: the GDPR and its progeny General law: Is the GDPR becoming lex generalis in AI regulation & adjudication (the ‘Law of Everything’, cf. Purtova, Law, Innovation & Technology, vol 10 n°1)? I. The design of the GDPR Data processing (Art 4(2) GDPR): ‘Processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction. The GDPR is fundamentally a people-protecting instrument because it deals with personal data.
Scope of application: Ratione personae (by reason of the person) (Art 1(1)): This Regulation lays down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data. Ratione materiae (by reason of the subject matter) (Art 2(1)): This Regulation applies to the processing of personal data wholly or partly by automated means and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system. Ratione loci (by reason of the place) (Art 3(1) / Art 3(2)): 1. This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not. 2. This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or the monitoring of their behavior as far as their behavior takes place within the Union. E.g. Google and Facebook, whose headquarters are not in the EU. Principles: Preamble, pt 1: The protection of natural persons in relation to the processing of personal data is a fundamental right. Article 8(1) of the Charter of Fundamental Rights of the European Union (the ‘Charter’) and Article 16(1) of the Treaty on the Functioning of the European Union (TFEU) provide that everyone has the right to the protection of personal data concerning him or her.
Art 4(1): ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person; Limit (Art 9(1)): Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation shall be prohibited. The condition for data processing is consent: Art 4(11): ‘Consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her. NB: you must read the GDPR. Consent (Art 7(2)): (…) written declaration (…), the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form. E.g. Facebook is sometimes sneaky about the lawfulness of its data processing. Actors: Art 4(8): ‘processor’ means a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.
Art 4(7): ‘controller’ means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law. It is important to know who the controller is, because it is the controller who needs our consent. The regulatory approach: Like the AI Act, which departs from the idea of human dignity, the GDPR follows the same logic when we look at its structure. The GDPR relies on three families of principles: 1. Lawfulness, fairness, transparency & accountability 2. Proportionality & necessity 3. Accuracy, integrity & confidentiality First family of principles: When is data processing lawful and fair? Art 6: When the data processing complies with any applicable legislation, either national or European. Art 7: When consent is given. Art 8: Minors' consent – protection of a vulnerable group! Art 9: Protected characteristics. Second family of principles: It is expressed in specific provisions. The controller, who defines the processing of the personal data, needs to have a specific purpose in mind (purpose limitation – Art 5(1)(b)); we need to know and be aware of it. The data need to be stored for a limited period (storage limitation – Art 5(1)(e)). Third family of principles: The data collected should be accurate. If we think that the data is inaccurate or irrelevant, we should be able to go to the controller and ask for it to be erased. How do we operationalize this? These three families of principles are translated into rights.
The first family of principles is translated into the right to transparency: the controller must tell us the purpose, why they collected our data and what we consented to. The second family of principles is translated into the right to restriction: the data must be collected for the purpose mentioned by the controller. The third family of principles is translated into the rights to rectification and erasure, and the right to object. Obligations: Rights are accompanied by obligations, which ensure that the rights are safeguarded. People who share the data have the rights, but the platforms also have obligations. Obligations of processors (Art 28): The processor is in a contractual relationship with the controller, governed by the contract and any applicable law. The processor must:  not engage another processor without prior specific or general written authorization of the controller;  carry out processing governed by a contract or other legal act under Union or Member State law, that sets out the subject-matter and duration of the processing, the nature and purpose of the processing, the type of personal data and categories of data subjects and the obligations and rights of the controller;  adhere to an approved code of conduct. Obligations of controllers (Art 24):  implement appropriate technical and organizational measures to ensure and to be able to demonstrate that processing is performed in accordance with the GDPR;  implement appropriate data protection policies;  adhere to approved codes of conduct. Data protection by design and by default (Art 25) – obligations of controllers:  (by design) implement appropriate technical and organizational measures, such as pseudonymization, which are designed to implement data-protection principles, such as data minimization, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of the GDPR.
 (by default) implement appropriate technical and organizational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual's intervention to an indefinite number of natural persons. De Gregorio & Dunn, “The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age”, at 481: “Obligations may, therefore, be objectively “uneven”, reflecting the interests of the actors called to comply with the GDPR, but this different outcome is justified in that it is the consequence of a specific balancing test operated directly by data controllers based on the principle of accountability. This last aspect, which is precisely what characterizes the GDPR as a bottom-up risk-based regulation, where the balancing between interests is made directly by the targets of regulation rather than by the law, emerges from a range of different provisions. The GDPR, for instance, introduces the requirement that controllers carry out a data protection impact assessment (DPIA) whenever a specific type of processing is likely to result in a “high” risk to the rights and freedoms of natural persons.” The aim of the GDPR is to prevent risks to your fundamental rights, but the GDPR does not define risk. The GDPR is a risk-based regulation like the AI Act, but its approach is different because it is bottom-up; the AI Act is top-down because the Act itself defines the risks. GDPR – bottom-up perspective: The evaluation of risk and the choice of mitigating measures are not defined by the law. AI Act – top-down basis: It is the AI Act itself that directly identifies the various categories of risk.
 (GDPR) Left to the discretion of the targets of regulation themselves: in that case, to data controllers and processors.  (AI Act) Does not leave the task of evaluating such risk scores to the targets of regulation. II. What does the CJEU say? Quid personal data? CJEU, 20 December 2017, Peter Nowak, Case C-434/16 Background: Mr Nowak was a trainee accountant who passed the first-level accountancy exams and three second-level exams set by the Institute of Chartered Accountants of Ireland (CAI). However, Mr Nowak failed subsequent exams and, in 2010, he requested that all personal data held by the CAI be transmitted to him. The CAI maintained that it did not hold any of Mr Nowak's personal data. He then contacted the Data Protection Commissioner with a view to challenging the reason given for the refusal to disclose his examination script. The Data Protection Commissioner replied by email that ‘exam scripts do not generally fall to be considered (for data protection purposes)’. Procedure: The Supreme Court (Ireland) submitted two questions for preliminary ruling (Art 267 TFEU). Q.1. Is information recorded in/as answers given by a candidate during a professional examination capable of being personal data, within the meaning of Directive 95/46? Q.2. If the answer to Question 1 is that all or some of such information may be personal data within the meaning of the Directive, what factors are relevant in determining whether in any given case such script is personal data, and what weight should be given to such factors? Decision of the CJEU: 37. First, the content of those answers reflects the extent of the candidate’s knowledge and competence in a given field and, in some cases, his intellect, thought processes, and judgment. In the case of a handwritten script, the answers contain, in addition, information as to his handwriting. 38.
Second, the purpose of collecting those answers is to evaluate the candidate’s professional abilities and his suitability to practice the profession concerned. 39. Last, the use of that information, one consequence of that use being the candidate’s success or failure at the examination concerned, is liable to have an effect on his or her rights and interests, in that it may determine or influence, for example, the chance of entering the profession aspired to or of obtaining the post sought. 46. Further, the question whether written answers submitted by a candidate at a professional examination and any comments made by the examiner with respect to those answers should be classified as personal data cannot be affected, contrary to what is argued by the Data Protection Commissioner and the Irish government, by the fact that the consequence of that classification is, in principle, that the candidate has rights of access and rectification, pursuant to Article 12(a) and (b) of Directive 95/46. 51. Further, it is clear that the rights of access and rectification, provided for in Article 12(a) and (b) of Directive 95/46, may also be asserted in relation to the written answers submitted by a candidate at a professional examination and to any comments made by an examiner with respect to those answers. 52. Of course, the right of rectification provided for in Article 12(b) of Directive 95/46 cannot enable a candidate to ‘correct’, a posteriori, answers that are ‘incorrect’. 53. It is apparent from Article 6(1)(d) of Directive 95/46 that the assessment of whether personal data is accurate and complete must be made in the light of the purpose for which that data was collected. That purpose consists, as far as the answers submitted by an examination candidate are concerned, in being able to evaluate the level of knowledge and competence of that candidate at the time of the examination. That level is revealed precisely by any errors in those answers. 
Consequently, such errors do not represent inaccuracy, within the meaning of Directive 95/46, which would give rise to a right of rectification under Article 12(b) of that directive. Quid controller (i.e. to whom should consent be given)? CJEU, 10 July 2018, Jehovan todistajat, case C-25/17, EU:C:2018:551 Background: The Finnish Data Protection Board adopted a decision prohibiting the Jehovah’s Witnesses Community from collecting or processing personal data in the course of door-to-door preaching carried out by its members unless the legal requirements of national law were met. The Jehovah’s Witnesses Community argued it was not a controller of personal data and that its activity did not constitute unlawful processing of such data. Procedure: The Korkein hallinto-oikeus (Supreme Administrative Court) submitted four questions for preliminary ruling (Art. 267 TFEU). Q3, relevant for us: Must the phrase “alone or jointly with others determines the purposes and means of the processing of personal data” appearing in Article 2(d) of … Directive [95/46] be interpreted as meaning that a religious community that organises an activity in the course of which personal data is collected (in particular, by allocating areas in which the activity is carried out among the various preachers, supervising the activity of those preachers and keeping a list of individuals who do not wish the preachers to visit them) may be regarded as a controller, in respect of the processing of personal data carried out by its members, even if the religious community claims that only the individual members who engage in preaching have access to the data that they gather? Decision of the CJEU: 70.
In the present case, as is clear from the order for reference, it is true that members of the Jehovah’s Witnesses Community who engage in preaching determine in which specific circumstances they collect personal data relating to persons visited, which specific data are collected and how those data are subsequently processed. However, as set out in paragraphs 43 and 44 of the present judgment, the collection of personal data is carried out in the course of door-to-door preaching, by which members of the Jehovah’s Witnesses Community who engage in preaching spread the faith of their community. That preaching activity is, as is apparent from the order for reference, organised, coordinated and encouraged by that community. In that context, the data are collected as a memory aid for later use and for a possible subsequent visit. Finally, the congregations of the Jehovah’s Witnesses Community keep lists of persons who no longer wish to receive a visit, from those data which are transmitted to them by members who engage in preaching. 73. In the light of the file submitted to the Court, it appears that the Jehovah’s Witnesses Community, by organising, coordinating and encouraging the preaching activities of its members intended to spread its faith, participates, jointly with its members who engage in preaching, in determining the purposes and means of processing of personal data of the persons contacted, which is, however, for the referring court to verify with regard to all of the circumstances of the case. CJEU, 29 July 2019, Fashion ID, Case C-40/17, EU:C:2019:629 Background: Fashion ID, an online clothing retailer, embeds on its website the ‘Like’ social plugin from Facebook. The operator of a website embedding third-party content onto the website can control neither what data the browser transmits nor what the third party does with those data. When a person visits the website of Fashion ID, their personal data are transmitted to Facebook Ireland.
The visitor is not aware of this, regardless of whether they are a member of Facebook or have clicked on Facebook’s ‘Like’ button. A German public-service association tasked with safeguarding the interests of consumers considered that Fashion ID transmitted personal data to Facebook without the visitors' consent. Procedure: The Oberlandesgericht Düsseldorf (Higher Regional Court, Düsseldorf) submitted six questions for preliminary ruling to the CJEU (Art. 267 TFEU). Relevant questions: Q2 & Q5. Q2. In a case such as the present one, in which someone has embedded a programming code in his website which causes the user’s browser to request content from a third party and, to this end, transmits personal data to the third party, is the person embedding the content the “controller” within the meaning of Article 2(d) of Directive [95/46] if that person is himself unable to influence this data-processing operation? Q5. To whom must the consent to be declared under Articles 7(a) and 2(h) of Directive [95/46] be given in a situation such as that in the present case? Decision of the CJEU: 67. Furthermore, since, as Article 2(d) of Directive 95/46 expressly provides, the concept of ‘controller’ relates to the entity which ‘alone or jointly with others’ determines the purposes and means of the processing of personal data, that concept does not necessarily refer to a single entity and may concern several actors taking part in that processing, with each of them then being subject to the applicable data-protection provisions. 76. In view of that information, it should be pointed out that the operations involving the processing of personal data in respect of which Fashion ID is capable of determining, jointly with Facebook Ireland, the purposes and means are, for the purposes of the definition of the concept of ‘processing of personal data’ in Article 2(b) of Directive 95/46, the collection and disclosure by transmission of the personal data of visitors to its website.
By contrast, in the light of that information, it seems, at the outset, impossible that Fashion ID determines the purposes and means of subsequent operations involving the processing of personal data carried out by Facebook Ireland after their transmission to the latter, meaning that Fashion ID cannot be considered to be a controller in respect of those operations within the meaning of Article 2(d). 78. Moreover, by embedding that social plugin on its website, Fashion ID exerts a decisive influence over the collection and transmission of the personal data of visitors to that website to the provider of that plugin, Facebook Ireland, which would not have occurred without that plugin. 79. In these circumstances, and subject to the investigations that it is for the referring court to carry out in this respect, it must be concluded that Facebook Ireland and Fashion ID determine jointly the means at the origin of the operations involving the collection and disclosure by transmission of the personal data of visitors to Fashion ID’s website. 101. In the present case, while the operator of a website that embeds on that website a social plugin causing the browser of a visitor to that website to request content from the provider of that plugin and, to that end, to transmit to that provider the personal data of the visitor can be considered to be a controller, jointly with that provider, in respect of operations involving the collection and disclosure by transmission of the personal data of that visitor, its duty to obtain the consent from the data subject under Article 2(h) and Article 7(a) of Directive 95/46 and its duty to inform under Article 10 of that directive relate only to those operations. By contrast, those duties do not cover operations involving the processing of personal data at other stages occurring before or after those operations which involve, as the case may be, the processing of personal data at issue. 102.
With regard to the consent referred to in Article 2(h) and Article 7(a) of Directive 95/46, it appears that such consent must be given prior to the collection and disclosure by transmission of the data subject's data. In such circumstances, it is for the operator of the website, rather than for the provider of the social plugin, to obtain that consent, since it is the fact that the visitor consults that website that triggers the processing of the personal data. As the Advocate General noted in point 132 of his Opinion, it would not be in line with efficient and timely protection of the data subject's rights if the consent were given only to the joint controller that is involved later, namely the provider of that plugin. However, the consent that must be given to the operator relates only to the operation or set of operations involving the processing of personal data in respect of which the operator actually determines the purposes and means.

Standards of protection of data subjects' rights

1° Standards applied to data processing within the EU

CJEU, 4 July 2023, Meta Platforms et al., case C-252/21, EU:C:2023:537

Background : To collect and process user data, Meta Platforms relies on the contract for the use of the services entered into with its users when they click on the 'Sign up' button, thereby accepting Facebook's terms of service. Acceptance of those terms of service is an essential requirement for using the Facebook social network. Facebook had developed the practice of collecting data from other group services (Instagram and WhatsApp), as well as from third-party websites and apps via integrated interfaces or via cookies placed on the user's computer or mobile device, linking those data with the user's Facebook account and then using them.
Procedure : The German Federal Cartel Office brought proceedings against Meta, asking it to adapt its general terms so as to make clear that those data would neither be collected, nor linked with Facebook user accounts, nor used without the consent of the user concerned, and clarifying that such consent is not valid if it is a condition for using the social network. The competition-law argument was 'abuse of dominant position'.

Where is the abuse here? Under the GDPR, Facebook's consent must be freely given, specific, informed and unambiguous; Facebook was instead imposing its terms on users as a condition of access.

The Oberlandesgericht Düsseldorf (Higher Regional Court, Düsseldorf) submitted seven questions for preliminary ruling to the CJEU (Art. 267 TFEU).

Relevant for us : Q. 5 and Q. 6

Q. 5: Can collecting data from other group services and from third-party websites and apps via integrated interfaces such as "Facebook Business Tools", or via cookies or similar storage technologies placed on the internet user's computer or mobile device, linking those data with the user's Facebook.com account and using them, or using data already collected and linked by other lawful means, also be justified under Article 6(1)(c), (d) and (e) of the GDPR in individual cases, for example to respond to a legitimate request for certain data (point (c)), to combat harmful behaviour and promote security (point (d)), to research for social good and to promote safety, integrity and security (point (e)) (...)

Q. 6: Can consent within the meaning of Article 6(1)(a) and Article 9(2)(a) of the GDPR be given effectively and, in accordance with Article 4(11) of the GDPR in particular, freely, to a dominant undertaking such as [Meta Platforms Ireland]?
Decision of the CJEU :

Pt 127: 'In so far as that question refers to points (c) and (e) of the first subparagraph of Article 6(1) of the GDPR, it must be recalled that, under point (c), processing of personal data is lawful if it is necessary for compliance with a legal obligation to which the controller is subject. In addition, under point (e), processing that is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller is also lawful.' (First condition: the processing must have a lawful basis.)

Pt 128: 'Article 6(3) of the GDPR specifies, inter alia, in respect of those two situations in which processing is lawful, that the processing must be based on EU law or on Member State law to which the controller is subject, and that that legal basis must meet an objective of public interest and be proportionate to the legitimate aim pursued.' (Second condition: it must pursue an objective of public interest.)

Pt 132: 'it will be for the referring court, inter alia, to inquire, for the purposes of applying point (c) of the first subparagraph of Article 6(1) of the GDPR, whether Meta Platforms Ireland is under a legal obligation to collect and store personal data in a preventive manner in order to be able to respond to any request from a national authority seeking to obtain certain data relating to its users.'

Pt 133: 'it will be for that court to assess, in the light of point (e) of the first subparagraph of Article 6(1) of the GDPR, whether Meta Platforms Ireland was entrusted with a task carried out in the public interest or in the exercise of official authority, in particular with a view of carrying out research for the social good and to promote safety, integrity and security, bearing in mind that, given the type of activity and the essentially economic and commercial nature thereof, it seems unlikely that that private operator was entrusted with such a task.'

Pt 136: '(Art. 6(1)(d)) covers the specific situation in which the processing of personal data is necessary to protect an interest which is essential for the life of the data subject or that of another natural person. In that regard, the recital cites by way of example, inter alia, humanitarian purposes, such as monitoring epidemics and their spread, as well as situations of humanitarian emergencies, such as situations of natural and man-made disasters.' (Examples of when processing could be justified.)

Pt 137: 'It follows from those examples and from the strict interpretation to be given to point (d) of the first subparagraph of Article 6(1) of the GDPR that, in view of the nature of the services provided by the operator of an online social network, such an operator, whose activity is essentially economic and commercial in nature, cannot rely on the protection of an interest which is essential for the life of its users or of another person in order to justify, absolutely and in a purely abstract and preventive manner, the lawfulness of data processing such as that at issue in the main proceedings'. (Meta's processing could not be justified on this ground.)

Pt 142: 'Article 4(11) of the GDPR, for its part, defines 'consent' as meaning 'any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her'.

Pt 143: 'Recital 43 of that regulation states that, in order to ensure that consent is freely given, consent should not provide a valid legal ground for the processing of personal data where there is a clear imbalance between the data subject and the controller.
That recital also clarifies that consent is presumed not to be freely given if it does not allow separate consent to be given to different personal data processing operations despite it being appropriate in the individual case.'

Pt 149: 'the existence of such a dominant position may create a clear imbalance, within the meaning of recital 43 of the GDPR, between the data subject and the controller, that imbalance favouring, inter alia, the imposition of conditions that are not strictly necessary for the performance of the contract, which must be taken into account under Article 7(4) of that regulation. In that context, it must be borne in mind that, as stated in paragraphs 102 to 104 above, it does not appear, subject to verification by the referring court, that the processing at issue in the main proceedings is strictly necessary for the performance of the contract between Meta Platforms Ireland and the users of the social network Facebook.'

Pt 150: 'those users must be free to refuse individually, in the context of the contractual process, to give their consent to particular data processing operations not necessary for the performance of the contract, without being obliged to refrain entirely from using the service offered by the online social network operator, which means that those users are to be offered, if necessary for an appropriate fee, an equivalent alternative not accompanied by such data processing operations.'

Pt 151: 'In view of the fact that those users cannot reasonably expect data other than those relating to their conduct within the social network to be processed by the operator of that network, it is appropriate, within the meaning of recital 43, to have the possibility of giving separate consent for the processing of the latter data, on the one hand, and the off-Facebook data, on the other.
It is for the referring court to ascertain whether such a possibility exists, in the absence of which the consent of those users to the processing of the off-Facebook data must be presumed not to be freely given.'

In short, Meta should not force users into sharing their personal data. The CJEU analysed the conditions under which a company in a dominant position, such as Meta, may define the terms under which users give their consent. A dominant position may create an imbalance: Facebook dictates what users are deemed to agree to, whereas consumers should have a genuine opportunity to refuse.

2° Standards applied to data processing and transfer of data to third countries

CJEU, 6 October 2015, Schrems, case C-362/14, EU:C:2015:650 (Schrems I)

Background : Mr Schrems went to the national data protection commission and made a complaint to the Commissioner by which he in essence asked the latter to exercise his statutory powers by prohibiting Facebook Ireland from transferring his personal data to the US. The request was rejected. Mr Schrems brought an action before the High Court challenging the decision at issue in the main proceedings. After considering the evidence adduced by the parties to the main proceedings, the High Court found that the electronic surveillance and interception of personal data transferred from the European Union to the United States did not serve necessary and indispensable objectives of public interest. This was illegal with regard to Irish law and possibly EU law (Decision 2000/520).

Procedure : The High Court submitted 2 questions for preliminary ruling to the CJEU (Art. 267 TFEU).

Questions :

Q.1.
Whether in the course of determining a complaint which has been made to an independent office holder who has been vested by statute with the functions of administering and enforcing data protection legislation that personal data is being transferred to another third country (in this case, the United States of America) the laws and practices of which, it is claimed, do not contain adequate protections for the data subject, that office holder is absolutely bound by the Community finding to the contrary contained in [Decision 2000/520] having regard to Article 7, Article 8 and Article 47 of [the Charter], the provisions of Article 25(6) of Directive [95/46] notwithstanding?

Q.2. Or, alternatively, may and/or must the office holder conduct his or her own investigation of the matter in the light of factual developments in the meantime since that Commission decision was first published?'

 What level of discretion does the Commission's adequacy decision leave to national authorities?

Decision of the CJEU :

Pt 47: In accordance with Article 8(3) of the Charter and Article 28 of Directive 95/46, the national supervisory authorities are responsible for monitoring compliance with the EU rules concerning the protection of individuals with regard to the processing of personal data; each of them is therefore vested with the power to check whether a transfer of personal data from its own Member State to a third country complies with the requirements laid down by Directive 95/46. (Is Decision 2000/520 valid or not?)

Pt 51: The Commission may adopt, on the basis of Article 25(6) of Directive 95/46, a decision finding that a third country ensures an adequate level of protection. In accordance with the second subparagraph of that provision, such a decision is addressed to the Member States, who must take the measures necessary to comply with it.
Pursuant to the fourth paragraph of Article 288 TFEU, it is binding on all the Member States to which it is addressed and is therefore binding on all their organs.

Pt 53: 'However, a Commission decision adopted pursuant to Article 25(6) of Directive 95/46, such as Decision 2000/520, cannot prevent persons whose personal data has been or could be transferred to a third country from lodging with the national supervisory authorities a claim, within the meaning of Article 28(4) of that directive, concerning the protection of their rights and freedoms in regard to the processing of that data. Likewise, as the Advocate General has observed in particular in points 61, 93 and 116 of his Opinion, a decision of that nature cannot eliminate or reduce the powers expressly accorded to the national supervisory authorities by Article 8(3) of the Charter and Article 28 of the directive.' (Member States can still double-check.)

The 'million dollar question': is Decision 2000/520 valid? Answer: NO, because...

Pt 95: 'legislation not providing for any possibility for an individual to pursue legal remedies in order to have access to personal data relating to him, or to obtain the rectification or erasure of such data, does not respect the essence of the fundamental right to effective judicial protection, as enshrined in Article 47 of the Charter. The first paragraph of Article 47 of the Charter requires everyone whose rights and freedoms guaranteed by the law of the European Union are violated to have the right to an effective remedy before a tribunal in compliance with the conditions laid down in that article.
The very existence of effective judicial review designed to ensure compliance with provisions of EU law is inherent in the existence of the rule of law.' (No procedural means for the individual.)

Pt 102: 'Decision 2000/520 (denies) the national supervisory authorities the powers which they derive from Article 28 of Directive 95/46, where a person, in bringing a claim under that provision, puts forward matters that may call into question whether a Commission decision that has found, on the basis of Article 25(6) of the directive, that a third country ensures an adequate level of protection is compatible with the protection of the privacy and of the fundamental rights and freedoms of individuals.'

Conclusion: Decision 2000/520 was declared invalid!

CJEU, 16 July 2020, Schrems, case C-311/18, EU:C:2020:559 (Schrems II)

Background : Following Schrems I, the European Commission adopted a new decision (the SCC Decision). In 2013, Mr Schrems filed a new complaint requesting that Facebook Ireland be prohibited from transferring his personal data to the US because - he argued - the US did not provide adequate protection of personal data. Following the Schrems I ruling, Mr Schrems referred the decision back to the (national data protection) Commissioner. In the course of the Commissioner's investigation, Facebook Ireland explained that a large part of the personal data transferred to Facebook Inc. meets the standard data protection clauses set out in the annex of the SCC Decision. The Commissioner published a 'draft decision' summarizing the provisional findings of her investigation. She took the provisional view that the personal data of EU citizens transferred to the US were likely to be consulted and processed by US authorities in a manner incompatible with Articles 7 and 8 of the Charter, without the necessary remedies within the meaning of Article 47 of the Charter.
The Commissioner found that the standard data protection clauses in the annex of the SCC Decision are not capable of remedying that defect, since they confer only contractual rights against the data exporter and importer, without binding the US authorities.

Procedure : The High Court submitted 11 questions for preliminary ruling to the CJEU (Art. 267 TFEU).

Relevant for us:

Q. 7: Does the fact that the standard contractual clauses apply as between the data exporter and the data importer and do not bind the national authorities of a third country who may require the data importer to make available to its security services for further processing the personal data transferred pursuant to the clauses provided for in [the SCC Decision] preclude the clauses from adducing adequate safeguards as envisaged by Article 26(2) of [Directive 95/46]? (...)

Q. 11: Does the [SCC Decision] violate Articles 7, 8 and/or 47 of the Charter?

Decision of the CJEU :

Issue of safeguards?

Pt 127: 'the question arises whether a Commission decision concerning standard data protection clauses, adopted pursuant to Article 46(2)(c) of the GDPR, is invalid in the absence, in that decision, of guarantees which can be enforced against the public authorities of the third countries to which personal data is or could be transferred pursuant to those clauses.' (...)

Pt 133: 'the standard data protection clauses adopted by the Commission on the basis of Article 46(2)(c) of the GDPR are solely intended to provide contractual guarantees that apply uniformly in all third countries to controllers and processors established in the European Union and, consequently, independently of the level of protection guaranteed in each third country.
In so far as those standard data protection clauses cannot, having regard to their very nature, provide guarantees beyond a contractual obligation to ensure compliance with the level of protection required under EU law, they may require, depending on the prevailing position in a particular third country, the adoption of supplementary measures by the controller in order to ensure compliance with that level of protection.'

Guarantees contained in the SCC Decision

Pt 134 (responsibility of the controller): 'it is therefore, above all, for that controller or processor to verify, on a case-by-case basis and, where appropriate, in collaboration with the recipient of the data, whether the law of the third country of destination ensures adequate protection, under EU law, of personal data transferred pursuant to standard data protection clauses, by providing, where necessary, additional safeguards to those offered by those clauses.'

Cf. pts:
138-139 – controllers are under an obligation to comply with the applicable EU legislation (which includes the Charter and the GDPR)
140 – suspension in case of non-compliance with the EU standard of protection

Pt 142: 'It follows that a controller established in the European Union and the recipient of personal data are required to verify, prior to any transfer, whether the level of protection required by EU law is respected in the third country concerned. The recipient is, where appropriate, under an obligation, under Clause 5(b), to inform the controller of any inability to comply with those clauses, the latter then being, in turn, obliged to suspend the transfer of data and/or to terminate the contract.'

Pt 147: '(...)
in order to avoid divergent decisions, Article 64(2) of the GDPR provides for the possibility for a supervisory authority which considers that transfers of data to a third country must, in general, be prohibited, to refer the matter to the European Data Protection Board (EDPB) for an opinion, which may, under Article 65(1)(c) of the GDPR, adopt a binding decision, in particular where a supervisory authority does not follow the opinion issued.'

It follows that...

Pt 148: '(...) the SCC Decision provides for effective mechanisms which, in practice, ensure that the transfer to a third country of personal data pursuant to the standard data protection clauses in the annex to that decision is suspended or prohibited where the recipient of the transfer does not comply with those clauses or is unable to comply with them.'

Pt 149: '(...) examination of the SCC Decision in the light of Articles 7, 8 and 47 of the Charter has disclosed nothing to affect the validity of that decision.'

CJEU, 26 July 2017, Opinion 1/15 (Draft Agreement, transfer of Passenger Name Record Data from the EU to Canada), EU:C:2017:592

Background : Draft agreement between the EU and Canada on data collected from passengers for the purpose of reserving flights between Canada and the EU, to be transferred to Canadian authorities and used to prevent or detect terrorist offences and/or serious transnational criminal offences, while providing a number of guarantees in relation to privacy and the protection of passengers' personal data. The question naturally arose of the compatibility of the agreement with primary EU law, i.e. Article 16 TFEU and Articles 7, 8 and 52(1) of the Charter.

Procedure : Article 218(11) TFEU – A Member State, the European Parliament, the Council or the Commission may obtain the opinion of the Court of Justice as to whether an agreement (between the EU and third countries or international organisations) envisaged is compatible with the Treaties.
Where the opinion of the Court is adverse, the agreement envisaged may not enter into force unless it is amended or the Treaties are revised.

CJEU – the PNR Agreement can be compatible with EU law if:
1. The PNR data to be transferred is defined in a clear and precise manner,
2. The criteria for automated processing are specific, reliable and non-discriminatory,
3. The collected data is used only by the Competent Canadian Authority,
4. The retention of the Passenger Name Record data is limited,
5. Disclosure of Passenger Name Record data by Canadian authorities is based on an agreement between the EU and the third country concerned or on a decision by the European Commission on the level of fundamental-rights protection in that country,
6. Air passengers have a right to individual notification in the event of use of their Passenger Name Record data,
7. Oversight of compliance with the rules in the PNR agreement is carried out by an independent authority.

The reference for this is the GDPR!

Exercise of rights by data subjects

The right to rectification / to be forgotten

CJEU, 13 May 2014, Google Spain, case C-131/12, EU:C:2014:317

Background : A Spanish national resident in Spain lodged with the AEPD a complaint against a daily newspaper with a large circulation, in particular in Catalonia (Spain) ('La Vanguardia'), and against Google Spain and Google Inc. The complaint was based on the fact that, when an Internet user entered the plaintiff's name in the search engine of the Google group ('Google Search'), he would obtain links to two pages of La Vanguardia's newspaper on which an announcement mentioning the name appeared for a real-estate auction connected with attachment proceedings for the recovery of social security debts. The plaintiff asked for that information to be removed (the right to erasure).

Procedure : The Audiencia Nacional referred 3 questions for preliminary ruling to the CJEU.

Relevant for us:

Q. 3.
must it be considered that the rights to erasure and blocking of data, provided for in Article 12(b), and the right to object, provided for by [subparagraph (a) of the first paragraph of Article 14] of Directive 95/46, extend to enabling the data subject to address himself to search engines in order to prevent indexing of the information relating to him personally, published on third parties' web pages, invoking his wish that such information should not be known to internet users when he considers that it might be prejudicial to him or he wishes it to be consigned to oblivion, even though the information in question has been lawfully published by third parties?'

Decision of the CJEU :

Pt 92: 'As regards Article 12(b) of Directive 95/46, the application of which is subject to the condition that the processing of personal data be incompatible with the directive, it should be recalled that (...), such incompatibility may result not only from the fact that such data are inaccurate but, in particular, also from the fact that they are inadequate, irrelevant or excessive in relation to the purposes of the processing, that they are not kept up to date, or that they are kept for longer than is necessary unless they are required to be kept for historical, statistical or scientific purposes.'

Pt 94: 'Therefore, if it is found, following a request by the data subject pursuant to Article 12(b) of Directive 95/46, that the inclusion in the list of results displayed following a search made on the basis of his name of the links to web pages published lawfully by third parties and containing true information relating to him personally is, at this point in time, incompatible with Article 6(1)(c) to (e) of
