Table of contents

Weak, strong and "superintelligent" AI
The main current technologies
ChatGPT - A major technical advance
The quantum computer
Part I: Ethical and legal framework for AI
The history of AI ethics: from the beginnings to the present day
1. The Precursors - Turing and the Universal Machine
2. The 1960s and 1970s - The birth of AI and early concerns
3. The 1980s - Reflections on the ethics of robots
4. The 2000s - The emergence of algorithmic biases
5. The contemporary era - Ethics to the fore
Successful and controversial AI applications
AI ethical codes: a smokescreen?
European AI legislation: an overview
1. Charter of Fundamental Rights of the European Union
2. The GDPR and the regulation of AI
3. The new AI Act
Criticism of the European AI regulation
AI regulations in the USA
Different regulatory approaches in Europe, the United States and China
Generative AI and copyright
The Ethical-by-Design approach
An approach inspired by the GDPR
Can ethics be programmed into an AI system?
Today's AI challenges
The challenge of explainability
The (worrying?) case of OpenAI
Liability for AI errors
The spectre of killer drones
The risk of anti-competitive practices
The risk of generalized surveillance
Algorithmic biases
The challenge of cybersecurity
The employment challenge
Part II: AI consciousness, myth or reality?
What do we (really) know about consciousness?
Materialist currents of consciousness
Computationalism
The difficult problem of consciousness
Qualia
Intentionality and free will
An unanswered question?
Non-local consciousness
Non-locality in physics
Non-locality in biology and neuroscience
Noetics
Quantum AI and consciousness?
How to test the consciousness of an AI?
The potential consequences of conscious AI
AI as an "entity": the debate on legal personality
What "rights" for conscious AI?

Weak, strong and "superintelligent" AI

The field of AI is constantly evolving and is characterized by a diversity of sub-fields and techniques. A first categorization, which has become classic, distinguishes weak AI from strong and "superintelligent" AI.

Weak (or narrow) AI: This form of AI, specialized in specific tasks, is already integrated into our daily lives. Voice assistants like Siri or Alexa and recommendation systems like those used by Netflix are perfect examples. They show how AI can simplify and personalize our experience with technology. Weak AI is used in a multitude of other applications, such as facial recognition and object identification, machine translation, autonomous driving or medical diagnostics.

General (or strong) AI: Although still theoretical, general AI is envisaged as having the capacity to perform any intellectual task that a human being can accomplish. This includes learning and problem-solving in a variety of contexts. General AI is still a distant goal, but it has the potential to revolutionize our lives even more profoundly than weak AI. It could be used to create robots capable of independent thought and action, or to develop systems capable of understanding and responding to the complexity of the real world.

Superintelligent AI: Surpassing human intelligence, this form of AI remains speculative. It is imagined to surpass humans not only in analysis, but also in creativity and empathy. Superintelligent AI is still a distant notion, but it raises important questions about the nature of intelligence.
If superintelligent AI were created, it could have a profound impact on our world, for better or for worse. Superintelligent AI goes hand in hand with the concept of the "singularity" (dear to science fiction), which refers to a theoretical point in the future where machine intelligence would far surpass human intelligence, leading to unpredictable and fundamental changes in human society, even its destruction according to some1.

The main current technologies

Today's AIs are often based on technologies known as machine learning, deep learning or natural language processing. Other techniques are also commonly used, such as expert systems or Bayesian networks2, and research is constantly developing.

Machine learning: With algorithms that learn from data, machine learning is a cornerstone of modern AI. It includes methods such as supervised, unsupervised and reinforcement learning.

Deep learning is a sub-category of machine learning that involves artificial neural networks with many layers. These networks are capable of learning high-level representations of data, enabling them to identify complex patterns and perform tasks such as image recognition, natural language understanding and autonomous vehicle driving. Deep learning relies on large quantities of data and considerable computing power to progressively optimize the weights of neural connections in order to improve performance on specific tasks.

Natural Language Processing (NLP): NLP is a branch of artificial intelligence that focuses on the interaction between computers and human language. The aim of NLP is to enable machines to understand and react to text or speech using algorithms that learn language models. NLP applications include machine translation, speech recognition, text generation, chatbots (such as ChatGPT, see below) and sentiment analysis. NLP development relies on deep learning techniques to process and analyze large quantities of textual data.

Sidebar

Supervised learning is a predictive method for training artificial intelligence (AI) models. This technique requires a dataset already labeled with the correct results to guide the model. For example, in image recognition, supervised learning uses clearly labeled images to teach the AI how to identify and classify new images autonomously.

Unsupervised learning, on the other hand, does not rely on labeled data and is used to detect hidden patterns or structures within a dataset. This approach is particularly useful for grouping or clustering data, where the AI needs to organize information into relevant categories without prior instruction. A frequent use case is the grouping of consumers with similar behaviors for market analyses.

Reinforcement learning is a learning strategy in which the AI interacts with an environment and performs actions that yield maximum reward. Unlike supervised learning, where the emphasis is on accurate prediction from input data, reinforcement learning focuses on decision-making and optimizing actions through trial and error. A classic example is learning a video game like Pac-Man, where the AI must navigate a maze by eating pellets while dodging ghosts. For each pellet eaten, the AI receives a reward, while contact with the ghosts results in a penalty. The aim is to develop an optimal strategy to maximize the score. Training an AI to master Pac-Man involves understanding and anticipating the game's patterns to iteratively improve its performance.

End of sidebar
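To make the trial-and-error idea in the sidebar more concrete, here is a minimal, purely illustrative sketch of tabular Q-learning on a toy "maze" with a pellet and a ghost. It is not the algorithm behind any particular game AI; the environment, reward values and hyperparameters are invented for the example.

import random

# Toy "maze": a 1-D corridor of 6 cells. Cell 5 holds the last pellet (episode ends,
# reward +10); cell 0 holds a ghost (episode ends, reward -10); every step costs -1.
N_CELLS = 6
ACTIONS = [-1, +1]          # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: one value per (cell, action) pair, initialised to zero.
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_CELLS - 1)
    if nxt == N_CELLS - 1:
        return nxt, +10.0, True    # reached the pellet
    if nxt == 0:
        return nxt, -10.0, True    # caught by the ghost
    return nxt, -1.0, False        # ordinary move

for episode in range(500):
    state = 2                                  # start somewhere in the middle
    done = False
    while not done:
        # Epsilon-greedy choice: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should point right, toward the pellet.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_CELLS - 1)})

The same reward-driven loop, scaled up to far richer states and neural-network value estimates, is what allows an agent to learn a game like Pac-Man by trial and error.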
ChatGPT - A major technical advance

ChatGPT, the AI developed by OpenAI3, conquered the planet in less than a year and is now used daily by hundreds of millions of people. The chatbot has become a matter of global dominance, and all the tech giants, from Silicon Valley to Beijing, have since thrown themselves fervently into the race for so-called "generative" AI. Why has ChatGPT been so successful, and why is it such a major innovation?

ChatGPT is based on the latest generation of natural language processing (NLP) models. It incorporates sophisticated algorithms that enable it to understand and generate human language with remarkable accuracy and fluency. This ability to interact in natural language makes ChatGPT particularly powerful for a variety of applications, from customer support to education.

ChatGPT's versatility is another aspect of its technical advance. It can be used for a variety of tasks, such as generating creative text, answering questions or writing summaries. This flexibility makes it applicable in a wide range of fields, paving the way for innovative uses. ChatGPT also stands out for its ability to conduct interactive conversations and understand the context of a discussion. This contextual understanding enables it to provide more relevant and personalized responses, enhancing the user experience.

The quantum computer

The quantum computer represents a technological revolution, promising to dramatically increase the power and efficiency of computer processing. It has become a matter of global dominance, and the major powers, allied with their technological champions, are investing considerable sums in its development. Applied to artificial intelligence, the quantum computer has the potential to transform AI's ability to solve complex problems, at a speed and with an efficiency unmatched by today's supercomputers4.

Quantum computers use qubits, which can exist simultaneously in several states (unlike conventional bits, which are either 0 or 1), enabling a multitude of possibilities to be processed in parallel. This property, known as superposition, combined with quantum entanglement, which links qubits so that their states remain correlated regardless of distance, could exponentially accelerate AI algorithms and their ability to recognize patterns in large datasets, significantly improving areas such as image recognition, natural language processing and data classification.
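As a brief formal aside on the superposition property just described (a standard textbook formulation, not specific to any particular quantum AI system): a single qubit's state is a weighted combination of the two classical values, and a register of n qubits is described by amplitudes over all 2^n classical configurations at once, which is what "processing a multitude of possibilities in parallel" refers to.

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
\]
\[
|\Psi_n\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle, \qquad \sum_{x} |c_x|^2 = 1 \quad (2^n \text{ amplitudes } c_x)
\]
\[
|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big)
\]

The last expression is the simplest example of an entangled pair: neither qubit has a definite value on its own, yet measuring one immediately fixes the statistics of the other.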
In the field of drug discovery, for example, quantum computing could accelerate the process of simulating molecular interactions and predicting molecular properties, making pharmaceutical research much more efficient and less dependent on trial-and-error methods.

However, there are significant challenges to overcome before quantum computing is widely adopted for AI. Building and maintaining stable quantum hardware is difficult due to the delicate nature of qubits, which are sensitive to any form of perturbation, a phenomenon known as decoherence. Moreover, quantum systems are prone to errors, requiring robust error-correction techniques. Hybrid approaches, combining classical and quantum algorithms, are likely to play a key role in practical applications of quantum AI.

The quantum computing market is growing, with forecasts indicating that it could reach $2.2 billion by 2026, and the number of operational quantum computers could reach around 180. Cloud-based access to quantum computing resources, known as Quantum Computing as a Service (QCaaS), is likely to dominate as a revenue source for quantum computing companies, accounting for 75% of all quantum computing revenues in 2026.

Collaborative efforts and future prospects for quantum AI are essential to the success of this convergence between quantum computing and artificial intelligence. Research, development and investment in quantum AI are increasing rapidly, with contributions from academic institutions, tech giants and startups, paving the way for a future where quantum computers and AI algorithms work together to revolutionize industries and reshape our understanding of computation and intelligence5.

Part I: Ethical and legal framework for AI

Ethics is a branch of philosophy concerned with morality, values, human actions and behavior. It aims to determine what is good or bad, just or unjust, virtuous or vicious, by providing principles and criteria to evaluate and guide individual and collective behavior. Ethics differs from morality in that morality refers to a particular set of beliefs, customs, mores or principles accepted by a group or society, while ethics is the critical study of these principles to determine their validity and applicability.

Ethics is also distinct from regulation. The two are separate concepts, but they are often interdependent when it comes to guiding and controlling behavior within a given society or field. Ethical principles can be interpreted and adapted to suit individual circumstances or perspectives. Although some ethical standards may be widely accepted, their implementation is often based on personal choice, conscience or social norms rather than formal mechanisms.

Regulation refers to rules, guidelines or laws established by a competent authority or body. It is usually the result of decisions by governments, regulatory bodies or legislatures. Regulations are often precise, clearly stipulating what is permitted or prohibited. They have formal enforcement mechanisms, with clearly defined consequences for non-compliance (such as fines, penalties or imprisonment). Regulations are generally aimed at maintaining order, protecting the rights of individuals or groups, ensuring safety, quality or fairness, or promoting certain economic, social or political interests.

It is important to note that, in many cases, regulation is influenced by ethical considerations. When a society recognizes a particular ethical concern, it may respond by putting regulations in place to codify and enforce certain behaviors. Similarly, ethical debates may emerge in response to regulations perceived as unfair or inadequate.

The history of AI ethics: from the beginnings to the present day

From the very first sketches of what artificial intelligence would be, ethical questions began to emerge.

1. The Precursors - Turing and the Universal Machine

In his famous 1950 article, "Computing Machinery and Intelligence", Alan Turing (British mathematician, cryptanalyst and theorist) posed the provocative question, "Can machines think?" Instead of answering directly, Turing proposed a test, now known as the "Turing Test", as a criterion for assessing whether or not a machine "thinks". If a machine could imitate a human being to such an extent that a judge could not distinguish its answers from those of a human, then, Turing suggested, we should accept that this machine "thinks". Turing's impact on computing and AI is colossal.
The universal machine prefigured the era of digital computing, and his thoughts on machine thinking began debates that continue to this day. Although technologies have evolved far beyond his early conceptualizations, Turing's legacy remains fundamental for anyone embarking on the study of computing or AI. We will see, however, that the Turing test has its limits, and even becomes double-edged, and that other tests are needed today to characterize "thinking" AI.

2. The 1960s and 1970s - The birth of AI and early concerns

As AI became a tangible reality, researchers began to see its potential implications. Joseph Weizenbaum6, with his ELIZA program in 1966, highlighted the risks of letting people attribute feelings or consciousness to a mere machine. The dilemma was born: how to differentiate real intelligence from imitation?

ELIZA is one of the first artificial intelligence programs designed to simulate a conversation. Although its design is relatively simple by modern standards, it was able to mimic, to some degree, the conversation of a Rogerian psychotherapist, relying mainly on the reformulation of the patient's statements. ELIZA used so-called "scripts" to guide the conversation. The most famous script, DOCTOR, simulates a therapist by asking open-ended questions or rephrasing the user's statements. For example, if a user said, "I'm sad", ELIZA might reply, "I'm sorry to hear you say you're sad". These responses were generated using simple rules, but they were convincing enough to engage the user in an interaction.

The ELIZA experience was surprising on several levels. Many people interacting with the program began to attribute real emotions and thoughts to it, even after being informed that it was simply a program following scripts. Weizenbaum himself was astonished and, to some extent, alarmed by the ease with which people formed an "emotional" relationship with a machine. Inspired by the reactions to ELIZA, Weizenbaum became one of the first and most prominent critics of artificial intelligence. He pointed out the potential dangers of inappropriately attributing emotions and intentions to machines. His 1976 book, "Computer Power and Human Reason"7, expresses his conviction that, although machines can be programmed to imitate human thinking, they should never be allowed to make decisions on matters of human importance.

Joseph Weizenbaum's creation of ELIZA not only marked a milestone in the development of human-computer interaction technologies, it also opened up an ethical debate on how human beings perceive and interact with machines. His observations on people's emotional attachment to ELIZA continue to resonate, more than ever, in today's world of virtual assistants and chatbots.

3. The 1980s - Reflections on the ethics of robots

The 1980s saw a significant expansion in the development and marketing of robots, mainly in industrial settings. With this growth came a growing awareness of the ethical implications of robotics. Robots began to make their way into factories, taking over manual, repetitive tasks traditionally carried out by humans. The increasing mechanization of factories raised concerns about worker safety, job loss and the alienation of humans from machines. This decade also saw the popularization of debates around Isaac Asimov's "Three Laws of Robotics"8.

Sidebar

The three laws of robotics, by Isaac Asimov:

"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

End of sidebar

With the potential for robots to cause damage or accidents, the question of who is liable in the event of fault became central. In the late 1980s, the idea that robots could one day operate with significant autonomy began to gain ground, raising new concerns.

4. The 2000s - The emergence of algorithmic biases

The rise of big data and machine learning algorithms revealed a new ethical challenge: bias. Studies showed that algorithms, although designed to be neutral, can in fact perpetuate, or even amplify, existing biases, raising crucial questions about justice, fairness and human rights. Algorithms began to show sometimes discriminatory results, often because they were trained on biased data9. Problems were identified in facial recognition systems, particularly with regard to racial and gender discrimination10. Concerns were raised about the use of algorithms for decision-making in the justice system, particularly in relation to bail determinations and the likelihood of recidivism11. Faced with these issues, many researchers and activists called for ethical considerations to be built into the development and deployment of AI12. It became essential to teach algorithm designers about the potential dangers of bias and the methods for minimizing it13.

Sidebar

A well-known example of algorithmic bias was discovered in the facial recognition system of the American technology company IBM. A study published in 2018 showed that the facial recognition software had significantly higher error rates for dark-skinned women compared to light-skinned men. The study, entitled "Gender Shades", highlighted the need to diversify the datasets used to train facial recognition algorithms and other AI technologies, and thus avoid perpetuating and amplifying existing prejudices.

End of sidebar

5. The contemporary era - Ethics to the fore

In today's era of ubiquitous artificial intelligence (AI), ethics is a matter of renewed urgency. Various international, academic and industrial institutions are working to develop guiding principles for the responsible development and deployment of AI. These efforts aim to balance technological innovation with ethical considerations, ensuring that the benefits of AI are realized without compromising our fundamental values. In 2019, the European Commission took a decisive step by issuing a set of guidelines to ensure that AI is developed and used ethically in Europe. These guidelines served as a prelude to a more comprehensive AI regulation, about to be adopted at the time of writing. This European regulation on AI represents a crucial milestone, reflecting a desire to structure technological innovation around essential human values, such as privacy, fairness and transparency.

The history of AI ethics illustrates a progressive and growing awareness. From early philosophical questions about the nature of thought and consciousness, we are now faced with concrete dilemmas concerning safety, fairness and justice in the AI era. Thorough and rigorous ethical reflection in this area is more crucial than ever, as AI becomes integrated into every aspect of our daily lives and redefines our social, economic and personal interactions.
This context underlines the urgent need for a robust and adaptive ethical framework, capable of guiding the development of AI while safeguarding the rights and well-being of individuals. Thus, AI ethics not only responds to current challenges, but also strives to anticipate and shape a future where technology and humanity coexist in harmony, guided by principles of responsibility, respect and integrity.

Successful and controversial AI applications

Europe has been the scene of many AI innovations, some hailed for their effectiveness and positive impact, while others have raised concerns and ethical debates.

AI ethical codes: a smokescreen?

In recent years, a number of organizations in both the public and private sectors have recognized the importance of the ethical issues associated with artificial intelligence, and have adopted codes of conduct in this respect.

The Asilomar Conference on Beneficial AI14 was organized by the Future of Life Institute15 and held on January 5-8, 2017 in California. The Future of Life Institute is an association based in the Boston area that aims to reduce the existential risks threatening humanity, particularly those arising from artificial intelligence (AI). Its founders and advisors include astrophysicists such as the late Stephen Hawking and the entrepreneur Elon Musk. Over 100 influential figures and researchers in economics, law, ethics and philosophy came together to explore and formulate the principles of beneficial artificial intelligence. The result is the "23 Asilomar AI Principles", which have since become a benchmark.

This approach was complemented by the "Montreal Declaration for the Responsible Development of AI" in December 201816. The declaration sets out ten principles to be respected: well-being, respect for autonomy, protection of intimacy and privacy, solidarity, democratic participation, equity, inclusion of diversity, prudence, responsibility and sustainable development. This declaration subsequently influenced reports by the OECD17 and UNESCO18. The recommendations and definitions of these two international organizations remain useful today, as they provide guidance for the development of legal frameworks for AI. Europe, in particular, has taken inspiration from them in its draft regulation, which we will discuss shortly.

More recently, on November 1 and 2, 2023, the first global summit on artificial intelligence safety was held at Bletchley Park in the United Kingdom19. Over the course of two days, some one hundred hand-picked experts, business leaders and political leaders20 discussed the dangers posed by the exponential progress of AI. The aim was to agree on joint action to prevent the technology from going off course. Remarkably, all three major world powers - China, the United States and the European Union - were represented, and agreed on a "shared responsibility" to address the risks of AI. The "Bletchley Declaration" commits governments and companies to "work together on the safety of new AI models before they are launched".

The summit drew strong criticism from civil society, focused on its scope and representativeness21. According to these dissonant voices, the event concentrated too much on hypothetical long-term risks of AI (such as the "singularity"), neglecting current, concrete problems such as algorithmic biases. The "Bletchley Declaration" was seen as a first step towards international cooperation, but criticized for its lack of detail on concrete actions to be taken.
In addition, the summit was criticized for not including a wider range of participants, in particular people from communities affected by AI. The concrete results of the summit appear limited to a commitment to organize further summits in the future.

Major technology companies, such as Google22 and Microsoft23, have also embarked on self-regulation initiatives by adopting various codes of conduct. These efforts illustrate a growing tendency to recognize the ethical challenges of AI, but they often represent aspirations rather than binding constraints. Effective implementation and monitoring of these ethical principles remain important challenges. According to an academic study24, the inconsistent application of these voluntary principles highlights the heterogeneous nature of AI ethics. While some principles, such as transparency, seem widely accepted, others, such as human dignity and sustainability, are less frequently addressed, signaling a need to tackle the environmental impact of AI technologies. Solidarity is often invoked in relation to the social impacts of AI, such as the loss of jobs due to automation. To integrate ethics right from the AI design stage, researchers suggest proactive approaches and technical initiatives to reduce algorithmic biases.

AI ethics also faces the problem of "ethics washing": ethics can be used as a selling point rather than a concrete practice. Companies' ethical declarations raise questions about the real commitment behind the proclaimed values. Public figures such as Elon Musk have been criticized for the way they present AI issues, which, for some, mixes science fiction and scientific reality, touching on concepts such as transhumanism and the technological singularity. The real question is whether the adoption of ethics in AI is motivated by a sincere commitment, or is simply a strategy to calm controversy and avoid more rigorous regulation.

European AI legislation: an overview

1. Charter of Fundamental Rights of the European Union

Although not specific to AI, this charter is often cited as a benchmark for the rights and freedoms that could be affected by AI technologies25.

2. The GDPR and the regulation of AI

The General Data Protection Regulation (GDPR), adopted in 2016 and in force since 2018, is a founding text of European legislation on the protection of personal data. Although it does not exclusively concern AI, the GDPR has important implications for AI systems that process personal data. AI must process data in compliance with the purposes for which it was collected, while limiting the amount of data processed26. AI systems must inform individuals clearly when processing their data, and users should have the right to challenge certain automated decisions27. AI must guarantee the security of processed data, including protection against unauthorized or unlawful processing28. The GDPR also enshrines the right of individuals not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects concerning them29. This last provision is probably the most important, since it establishes a genuine right to object to an algorithmic society that would govern our lives without the possibility of involving human beings in decision-making processes. We should note, as the CNIL does30, that in AI-based systems the sources of error are varied, owing to their complexity.
3. The new AI Act

In 2021, the Commission presented a proposal for a regulation establishing harmonized rules on artificial intelligence31. On December 8, 2023, after fierce debate, the European Parliament and the Council of the European Union reached agreement on this text, which will be the world's first comprehensive law on artificial intelligence.

The proposed regulatory framework has the following objectives:
- ensure that AI systems placed on the market are safe and respect existing legislation on fundamental rights, EU values and the rule of law, as well as environmental sustainability;
- guarantee legal certainty to facilitate investment and innovation in AI;
- strengthen governance and the effective enforcement of existing legislation on AI system safety requirements and fundamental rights;
- facilitate the development of a single market for lawful and safe AI applications, and prevent market fragmentation.

More specifically, the future regulation establishes:
- the prohibition of certain practices;
- specific requirements for high-risk AI systems;
- harmonized transparency rules applicable to AI systems designed to interact with people, emotion recognition and biometric categorization systems, and generative AI systems used to generate or manipulate image, audio or video content.

Sidebar

Foundation models used for generative AI - such as ChatGPT - will have to comply with additional transparency requirements. These include disclosing that content has been generated by an AI, designing the model to prevent the generation of illegal content, and publishing summaries of the copyrighted data used for training.

End of sidebar

This approach must take into account the beneficial social and environmental outcomes that AI can bring, but also the new risks or negative consequences that this technology can generate. Consistency is ensured with the EU Charter of Fundamental Rights32, but also with secondary legislation on data protection (GDPR), consumer protection, non-discrimination and gender equality. The Regulation complements existing non-discrimination law with requirements that aim to minimize the risk of algorithmic discrimination, accompanied by obligations regarding testing, risk management, documentation and human oversight throughout the lifecycle of AI systems.

The European Union's AI Act has passed its last significant hurdle towards adoption, as confirmed by the Committee of Permanent Representatives (COREPER) on February 2, 2024. After political agreement was reached in December 2023, the finalized text of the draft law received unanimous support from the ambassadors of all 27 EU member states. The approval of the AI Act by COREPER is a significant step forward, and the text is now expected to be formally adopted as law in the coming months. The European Parliament is set to hold a final vote on the compromise text, but given the level of consensus, this is expected to be a formality. Once adopted, the EU will move into the implementation stage, which will likely continue to see lobbying and discussions on the practicalities of the new regulation33.
The future AI regulation prohibits the following artificial intelligence practices:
- AI systems that use subliminal techniques, below the threshold of a person's consciousness, to substantially alter their behavior in a way that causes physical or psychological harm (manipulation of human behavior to circumvent free will);
- AI systems that exploit vulnerabilities due to age or disability to substantially alter a person's behavior in a way that causes physical or psychological harm;
- AI systems designed to assess or rank the trustworthiness of people based on their social behavior or personal characteristics, where this may result in prejudicial treatment that is unjustified or disproportionate in certain contexts.

The agreement reached between the Parliament and the Member States also specifies the ban on:
- biometric categorization systems using sensitive characteristics (political, religious or philosophical opinions, sexual orientation, etc.) and social scoring based on social behavior or personal characteristics;
- emotion recognition in the workplace and educational institutions;
- "real-time" remote biometric identification systems in publicly accessible areas for law enforcement purposes, except in the following cases: targeted searches for specific potential victims of crime (missing children, trafficking, sexual exploitation); prevention of a specific, substantial and imminent threat to the life or security of persons, or prevention of a terrorist attack; identification, location or prosecution of perpetrators or suspects of certain criminal offences punishable by a maximum sentence of at least three years.

The use of biometric identification systems must:
- take into account the situation giving rise to the use of the system and the seriousness or extent of the harm that would result in the absence of its use;
- take into account the consequences for the rights and freedoms of all persons concerned (seriousness, probability, extent);
- be subject to prior authorization by a competent judicial or administrative authority.

Annex III of the regulation lists high-risk AI systems. MEPs included in the regulation a mandatory fundamental rights impact assessment, which also applies to the banking and insurance sectors. AI systems used to influence election results and voter behavior are classified as high-risk. Citizens will have the right to file complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that affect their rights.

Failure to comply with the rules could result in fines ranging from 7.5 million euros or 1.5% of worldwide annual turnover to 35 million euros or 7% of worldwide annual turnover, depending on the size of the company and the offence.

The AI Act will enter into force 20 days after its publication in the EU's Official Journal and will begin to apply 24 months after coming into force, with certain provisions being phased in at different times. The phased implementation allows a six-month grace period before the prohibitions on unacceptable-risk AI systems begin to apply, likely around fall 2024, and a year before the rules on foundation models apply, which would be around 2025. The rest of the rules are expected to apply two years after the law's publication, which is anticipated for spring 2024.

Criticism of the European AI regulation

The adoption of the European "AI law" was not without its problems.
In the aftermath of the political agreement on the regulation reached in December 2023, President Emmanuel Macron was already attacking the text in these terms: "We will be the first place in the world where on so-called AI foundation models, we will regulate much more than others."34. These tensions have become classic: on one side, the desire to regulate in order to protect citizens' rights; on the other, the fear that such regulation will hamper innovation and Europe's competitiveness.

The criticism that Europe focuses more on regulation than on promoting its industry parallels the debate that has raged around the GDPR. Although widely hailed for strengthening privacy and data security, the GDPR has also been widely criticized by industry for its heavy regulatory burden and potentially inhibiting impact on digital innovation. The criticisms raised against the European AI regulation reflect similar concerns. On the one hand, there is a desire to protect citizens' rights and adhere to European values, including the protection of fundamental rights and the prevention of potential abuses of AI. On the other, there is a fear that these same regulations will hold back innovation and competitiveness, notably by imposing administrative and financial burdens that could be particularly onerous for startups and innovative small businesses. This could, according to some, drive companies to operate outside Europe, or discourage investment in emerging technologies. This tension reflects a wider debate on how Europe can balance consumer protection and regulatory requirements with the need to promote an environment conducive to innovation and economic growth. The aim is to find a happy medium where regulation does not become a burden, but rather a framework facilitating responsible innovation35.

In an op-ed published in Les Echos, business leaders, scientists and AI experts highlight the problems faced by European technology companies in the face of piling-up European legislation, which, drafted in rapid response to technological advances, becomes obsolete almost as quickly as it is enacted, harming the industry. The authors point out that the "AI Act", initiated in 2021, only superficially takes into account recent advances such as the ChatGPT phenomenon, and could be implemented in 2025, in an already transformed technological landscape. They argue that current regulations impose constraints that are ill-suited to today's generative AI technologies, and advocate a regulatory framework that would formulate general principles, in a risk-based approach36.

The future will tell whether European regulation will essentially be a millstone around the European industry's neck or, in a more positive scenario, an international standard that raises the level of protection on a European and global scale. Again, it is interesting to draw a parallel with the GDPR, which has undoubtedly had a considerable effect on the European economy and businesses, with a marked international reach. Its impact is, however, mixed: it has strengthened user privacy and unified privacy policies across member states, boosting consumer confidence. But it has also generated significant compliance costs, particularly for small and medium-sized businesses, which could deter their participation in the digital market and potentially consolidate the market share of larger firms37.
On the EU's digital economy front, the GDPR has been associated with a decrease in investment in startups and a reduction in the number of apps available on platforms such as Google Play just after its implementation. These effects highlight the challenges faced by businesses, particularly smaller ones, in complying with the GDPR's stringent requirements, which can divert resources from core operations, curb innovation and reduce new market entries.

Internationally, the GDPR has inspired similar data protection laws in various US states and prompted global companies to align their practices with its principles. It has catalyzed the reform of privacy laws worldwide, influencing countries such as Japan, Argentina, Switzerland, Israel and New Zealand to align their rules with those of the GDPR or to be recognized as offering an adequate level of protection, thus facilitating international data transfers38. Despite these advances, the application of the GDPR in Europe has sometimes been deemed insufficient, with some user advocacy groups calling it "extremely lax". Nevertheless, the existence of the GDPR has encouraged greater awareness of, and progress towards, stricter personal data protection rules on a global scale.

In terms of data security, the results are mixed. There may be fewer cookies overall on the web, but the number of data breaches has apparently increased, indicating that while the GDPR may have reduced cookie use, it is not clear that it has succeeded in improving user privacy. Today, some are asking whether technology might not be a better safeguard against... technology, in this case by resorting to blockchain and artificial intelligence, which could offer more effective solutions to Internet privacy issues than rigid mandates like the GDPR39. Similar voices are being heard with regard to AI regulation, which, they argue, would be better controlled by... AI than by law40.

AI regulations in the USA

State-Level AI Regulations: States like Illinois and New York are spearheading regulation at the state level. Illinois' Artificial Intelligence Video Interview Act41, which went into effect in January 2020, requires employers to notify job applicants when AI is used in video interviews and to explain how the AI works. New York City has proposed a bill that would regulate automated employment decision tools to prevent bias in hiring.

Federal Trade Commission (FTC) Guidance: The Federal Trade Commission (FTC)42 has been actively addressing the intersection of AI technologies and consumer protection laws. The FTC emphasizes that AI tools are not exempt from compliance with existing laws regarding promotion, advertising and discrimination. The agency has the authority to take action against unfair or deceptive practices that involve AI, ensuring that such technologies do not result in consumer harm. The FTC has also provided guidance for AI companies, underscoring the importance of honoring privacy and confidentiality commitments. Companies providing AI as a service are reminded that they must protect user data and be transparent about how data is used to train models. Any deviation from privacy promises can lead to FTC enforcement actions. Moreover, companies are expected to be transparent about the collection of sensitive data and to provide consumers with adverse action notices if automated decisions affect their eligibility for credit, employment, insurance or housing.
In terms of managing the risks associated with AI, the FTC suggests that the use of AI should be transparent, explainable, fair and empirically sound, fostering accountability. Transparency involves not misleading consumers about the nature of AI interactions, such as the use of chatbots. Companies are advised to be clear about their data collection processes and to provide adequate notices if consumer information is used to make automated decisions. The FTC's actions and guidance highlight the importance of explaining decisions to consumers, especially when denying something of value on the basis of algorithmic decision-making. Companies should also ensure that their AI decision-making processes do not result in discrimination against protected classes. These FTC guidelines are in line with its mission to protect consumers and competition by preventing deceptive and unfair business practices without unduly burdening legitimate business activity. The FTC continues to monitor and investigate the deployment of AI, ensuring that firms do not benefit from unlawful practices and that competition remains fair.

National Institute of Standards and Technology (NIST) AI Framework: NIST's voluntary framework is designed to help organizations manage risks associated with AI, including those related to data, performance and cybersecurity43.

Senate Proposals: Senate Majority Leader Chuck Schumer's SAFE Innovation framework44 represents a comprehensive approach to fostering AI development while ensuring it adheres to safety, accountability, core American values, explainability and innovation. The framework is designed to balance the rapid technological progress of AI with the need to safeguard against misuse or harm. SAFE Innovation stands for Security, Accountability, Foundations, Explainability and Innovation. Security refers to protecting national security, democratic institutions and the workforce. Accountability involves deploying responsible AI systems that address concerns around misinformation and bias and protect intellectual property. Foundations ensure that AI aligns with democratic values and safeguards elections. Explainability focuses on transparency, requiring AI developers and deployers to provide information about their systems that can be understood by the public. Innovation aims to maintain U.S. leadership in AI technology and foster an environment where AI's potential can be fully realized.

Schumer's framework also proposes a new legislative process, given that the rapid pace of AI development outstrips traditional legislative mechanisms. He plans to organize AI Insight Forums, where experts from various fields will convene to discuss and develop AI legislation. This initiative is informed by the need for bipartisan cooperation and is being developed with the assistance of a working group that includes senators from both parties. The framework acknowledges the significant knowledge gap between AI developers and policymakers, and thus the legislative process is expected to provide ample opportunities for stakeholder feedback and engagement. The SAFE Innovation framework is part of a broader push by the U.S. Congress to regulate AI, which includes various proposals and hearings. As global competitors like the EU and China advance their AI policies, Schumer's initiative aims for the U.S. to assert leadership in setting the standards for AI development and deployment.

Blumenthal-Hawley Framework:
Senators Richard Blumenthal and Josh Hawley have put forward a bipartisan framework for artificial intelligence (AI) legislation45 that aims to introduce several measures to regulate the collection and management of personal data by AI systems and establish guardrails for the technology's use. Key aspects of the proposed framework include:
- The creation of an independent oversight body with the power to audit companies developing sophisticated AI models or those used in high-risk situations, like facial recognition. This body would also cooperate with state attorneys general and monitor the technological and economic impacts of AI.
- Legal accountability for harms caused by AI, with Congress potentially requiring AI companies to be liable for privacy breaches, civil rights violations, or other damages. Moreover, this proposal suggests that Section 230 protections should not apply to AI, allowing for legal action against companies and perpetrators.
- National security measures through export controls and sanctions to prevent the transfer of advanced AI models and hardware to countries like China, Russia, or any nation involved in human rights abuses.
- Transparency requirements for AI developers, necessitating the disclosure of training data, model limitations, accuracy, and safety to users and deployers. Additionally, users would be informed when interacting with AI models or systems, and a public database would report significant adverse incidents related to AI.
- Consumer and child protection by giving individuals control over their personal data usage within AI systems and setting strict limits on AI involving children. Safety measures would also be mandated for AI used in high-risk situations, including safety brakes and notification requirements for adverse AI decisions.

This multifaceted approach at both the state and federal levels indicates a proactive and evolving regulatory environment in the U.S. as it seeks to integrate AI into society responsibly. The ongoing legislative efforts reflect an understanding of AI's potential impact across various facets of life and the need for a regulatory framework that can keep pace with technological advancements while protecting individual rights and societal values46.

In 2024, the United States has made significant strides in AI policy and regulation, reflecting a concerted effort to manage the risks and harness the opportunities of artificial intelligence. The Biden-Harris administration's approach is multi-faceted, emphasizing safety, security, innovation, and equity. A landmark Executive Order issued by President Biden aims to ensure that the U.S. leads in seizing the promise of AI while managing its risks. This includes actions to strengthen AI safety and security, protect privacy, advance equity and civil rights, and promote innovation and competition. The White House AI Council, consisting of top officials from a wide range of federal departments and agencies, has been convened to oversee the implementation of these directives, marking substantial progress in achieving the EO's mandate47.

Efforts to mitigate AI risks include using the Defense Production Act to compel developers of powerful AI systems to report vital information, proposing rules for U.S. cloud companies to report when they provide computing power for foreign AI training, and conducting risk assessments across every critical infrastructure sector.
To innovate AI for good, the administration has launched initiatives like the National AI Research Resource pilot, an AI Talent Surge for hiring professionals across the federal government, the EducateAI initiative to fund AI education, and funding for new Regional Innovation Engines to advance AI innovation. The legal and regulatory landscape in the U.S. is also evolving rapidly, with generative AI affecting the enforcement of privacy, securities and antitrust laws, and raising copyright disputes. The focus is on demanding accountability from companies of all sizes, with the implementation of red teams for generative AI solutions in high-risk areas. Despite these efforts, there are concerns about the enforceability and scope of U.S. regulations compared to the comprehensive steps taken by the European Union, such as the AI Act48.

Different regulatory approaches in Europe, the United States and China

Approaches to regulating artificial intelligence differ significantly between the USA, Europe and China, each reflecting the legal, cultural and political particularities of these regions. The American regulatory landscape is marked by a combination of federal and state laws, creating a rather complex mosaic of sector-specific laws and directives. The European approach is characterized by more comprehensive and prescriptive regulation. As discussed above, the European legislator is about to adopt a regulation on artificial intelligence that prohibits or limits certain high-risk applications of AI. China's approach to AI regulation is more state-centric and closely aligned with broader political and social governance objectives. The Cyberspace Administration of China has published draft "Administrative Measures for Generative Artificial Intelligence Services", aimed at ensuring that content created by generative AI complies with social order and societal mores, avoids discrimination, and is accurate and respectful of intellectual property rights49.

These different approaches reflect varying priorities: the USA seeks to balance innovation with consumer and worker protection, Europe takes a more rigid approach to managing risks and safeguarding fundamental rights, and China focuses on social order and alignment with state governance principles. Admittedly, the three powers have agreed on the aforementioned Bletchley Declaration, but it has to be said that this represents a vague, non-binding common denominator; the reality is that each is moving forward on its own, without any real coordination, according to its own priorities, values and strategic interests.

Generative AI and copyright

The question of copyright ownership in a work generated by generative AI (such as ChatGPT or Midjourney) is an emerging and complex legal issue that is the subject of much debate. In most jurisdictions, copyright requires human intervention in the creation of the work, which poses a problem for AI-generated works. Current French copyright legislation is mainly based on the Intellectual Property Code (CPI). This code does not explicitly address works created by AI. According to the CPI, a work is protected by copyright if it is original, i.e. if it bears the imprint of its author's personality. In the case of a work generated by an AI, it is difficult to identify a "personality" behind the creation.
Consequently, if a work is entirely generated by an AI without any significant human intervention, it would probably not be protected by copyright in France, for lack of originality within the meaning of the CPI. However, if a human has played a significant role in the creation process, for example by parameterizing the AI or selecting specific training data, he or she could be considered the author of the work and therefore the copyright holder. The situation is similar in other jurisdictions. In the USA, for example, the Copyright Office has declared that it will not register a work created by a machine or mechanical process without human intervention.

The problem with the use of copyrighted works, and with the protection of creative works produced by AI, lies in the impact that the widespread use of AI will have on the creative industry. The use of AI effectively deprives artists of income by performing tasks that would previously have been entrusted to them. Moreover, AI developers do not compensate artists for the use of their work. In view of these concerns, some people are considering the creation of a new neighboring right, which would enable artists whose work is used by AI to receive compensation. This right would have the same objective as the neighboring right for press publishers: to re-establish a market balance that has been destabilized by technological change50.

Recently, concerns have been raised by creators and writers about the possibility of their works being used to train AIs like ChatGPT without prior consent. These controversial practices raise questions about AI producing content that resembles or derives from original works, possibly by borrowing their style or substance. For example, a group of authors, including George R. R. Martin, took OpenAI to court, claiming that their writings had been used without authorization to develop ChatGPT. They called this practice "systematic piracy on a massive scale". Similarly, the New York Times filed suit, accusing AI developers of using its copyrighted content to refine and market their products without permission. These actions raise the unresolved question of whether AI training on copyrighted material constitutes infringement. In principle, the literal reproduction of protected works normally constitutes an infringement, barring exceptions such as "fair use" in American law51 or the right of quotation in Europe52. It will be up to the courts to determine whether AI training falls within such exceptions, by assessing whether AIs produce new works without harming the market for the originals.

In this complex context, researchers are exploring ways in which AIs can selectively "forget" specific information. A Microsoft study entitled "Who is Harry Potter?" has shown that it is possible to modify AI models to eliminate specific references from their "memory" while preserving their overall analytical capabilities. This research could have implications for AI companies facing legal challenges related to intellectual property. With the new European AI regulation, developers of AI models like ChatGPT will have to be more transparent about their use of copyrighted sources. This could enable creators to seek redress, and may require a re-evaluation of compensation schemes for the use of protected works.

The Ethical-by-Design approach

The Ethical-by-Design approach is a framework that emphasizes the importance of integrating ethical considerations throughout the design and development process of AI technologies.
This approach is necessary and complementary to regulation if it is to be effective. In practice, this means that ethical values are identified and integrated from the earliest stages of the design process. For example, a content recommendation system could be designed to avoid reinforcing filter bubbles by deliberately recommending diverse content that goes beyond users' usual preferences, thus promoting exposure to a variety of perspectives.

Inclusive design is also a central pillar of this approach, ensuring that AI products are accessible to a wide range of users. This can translate into the development of voice assistants capable of understanding diverse accents, thus recognizing the linguistic nuances of varied populations.

Transparency and explainability are crucial in Ethical-by-Design systems. A concrete example might be an AI system in the banking sector that explains to customers how their credit rating is calculated, enabling greater clarity and the possibility of revision in the event of error. Accountability is also key, and designers need to put in place mechanisms to monitor and correct potential problems. For a facial recognition algorithm, this could involve regular audits to ensure that there are no discriminatory biases in the recognition of different ethnic groups.

Privacy protection is imperative when designing AI systems. This can manifest itself in the use of advanced encryption techniques and data-sharing protocols that give users control over their personal information. To prevent bias and promote fairness, Ethical-by-Design systems use diverse datasets and perform fairness tests (a minimal sketch of such a test appears below). In healthcare, this could mean ensuring that the medical data used to train a diagnostic algorithm includes patients from all demographic and socio-economic groups.

Sustainability and social impact are considered when it comes to the long-term effects of AI systems. For example, an energy management algorithm can be designed to optimize the energy consumption of buildings, thereby reducing their carbon footprint. Multidisciplinary collaboration is essential to address ethical challenges holistically. For example, an AI project can bring together engineers, data scientists, lawyers and ethicists right from the design phase to discuss the potential implications of using AI in forensic decision-making. Finally, educating and raising awareness of AI ethics among designers and users is fundamental. This can take the form of mandatory ethics training for development teams, ensuring that ethical principles are understood and actively applied.

In short, the Ethical-by-Design approach aims to ensure that AI technology is not only technically advanced, but also socially responsible and ethically sound.
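As a minimal illustration of the kind of fairness test mentioned above, the sketch below compares a toy classifier's error rates across two demographic groups. The records, group labels and tolerance threshold are invented for the example; real fairness audits use many more metrics and much larger samples.

# Toy fairness check: compare false-negative rates of a classifier across two groups.
# All data below is fabricated for illustration only.

records = [
    # (group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

def false_negative_rate(rows):
    """Share of true positives that the model missed (predicted 0 while the truth was 1)."""
    positives = [r for r in rows if r[1] == 1]
    if not positives:
        return 0.0
    missed = sum(1 for r in positives if r[2] == 0)
    return missed / len(positives)

by_group = {}
for group, truth, pred in records:
    by_group.setdefault(group, []).append((group, truth, pred))

rates = {g: false_negative_rate(rows) for g, rows in by_group.items()}
print("False-negative rate per group:", rates)

# A simple audit rule: flag the model if the gap between groups exceeds a tolerance.
TOLERANCE = 0.10
gap = max(rates.values()) - min(rates.values())
print("Gap:", round(gap, 2), "-> review needed" if gap > TOLERANCE else "-> within tolerance")

Run on real evaluation data as part of a regular audit, this kind of per-group comparison is one concrete way of operationalizing the fairness and accountability principles described above.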
For example, in the AI context, privacy is ensured not only by complying with legal requirements such as encryption of personal data, but also by adopting broader strategies such as data minimization, where only the data strictly necessary for a given task is collected (which can prove arduous in the case of AI models that ingest huge amounts of data indiscriminately). Another example could be the use of advanced anonymization techniques when training AI models, which reflects both a commitment to ethics, by protecting the identity of individuals, and to privacy, by complying with GDPR guidelines. Furthermore, just as Privacy by Design calls for greater transparency in data management practices, Ethical-by-Design encourages transparency in AI algorithms, enabling users to understand decision-making mechanisms and challenge decisions that affect them. Both approaches also emphasize accountability. Under the GDPR, companies must be able to demonstrate compliance with data protection principles. Similarly, Ethical-by-Design requires AI designers and users to be accountable for the consequences of these systems' use, which includes ongoing assessment of the ethical impacts of technologies. This approach is also in line with the forthcoming European regulation on AI, which advocates transparency and accountability, notably by requiring developers to carry out an impact assessment on fundamental rights before a high-risk AI system is put on the market. Can ethics be programmed into an AI system? Different methodologies and techniques are being considered to ensure that AI systems act in a way that is consistent with human ethical values. Approaches to integrating ethics into AI include top-down, bottom-up and hybrid models. Top-down approaches involve programming explicit ethical rules derived from moral principles or legal guidelines directly into AI systems. This means that machines are designed to follow predefined codes of conduct, laws or standards of ethical behavior. Philosophers such as Wendell Wallach and Colin Allen have discussed this notion in their book "Moral Machines: Teaching Robots Right from Wrong", where they explore the possibility of implementing ethical decision-making systems in robots and AI systems53. Conversely, bottom-up approaches aim to develop ethics within AI systems through learning and evolution, through processes such as machine learning. Here, AI is not programmed with specific ethical rules, but rather develops ethical behaviors through interaction and experience, learning from mistakes and rewards. Stuart Russell, in his book "Human Compatible: Artificial Intelligence and the Problem of Control"54, refers to this approach, stressing the importance of creating AI systems that are intrinsically designed to act in a way that is aligned with the interests and values of human beings. There are also hybrid approaches that combine elements of top-down and bottom-up models, incorporating both predefined ethical rules and learning capabilities. This allows greater flexibility and adaptability in complex situations where black-and-white rules may not suffice. Today's AI challenges The challenge of explicability Explainability in artificial intelligence systems, particularly those based on deep neural networks, is a subject of growing concern. Without explainability, there can be no transparency. Explainability refers to the ability to understand and follow the decision-making process of an AI model.
In deep neural networks, the large number of parameters and the complexity of the internal structures make this process particularly difficult. This is often referred to as the "black box" problem in AI. To remedy this, several approaches can be taken, one of them being data traceability. Data traceability refers to the ability to document, track and audit data throughout its lifecycle in an AI system. This includes the ability to track where data comes from, understand how it is used by the model, and detect where and how errors or biases may be introduced into the system. In this context, an "open source" approach to AI is important, as it enables the community to evaluate, understand and improve AI systems. Various open source tools are already available to developers55. By integrating these tools into the development of AI models, developers can not only track the origin of errors, but also create more accurate and transparent models. Open source tools have the added benefit of community collaboration, which means that best practices and solutions to problems can be shared and improved on an ongoing basis. In addition, blockchain technology56, known for its robust security and transparency, presents a unique opportunity to enhance the explainability of AI. Recording the decisions made by an AI on a blockchain creates an immutable record that can be consulted to understand how a specific decision was made. Imagine a scenario where an AI algorithm approves bank loan applications. Every factor taken into account by the algorithm could be stored on the blockchain, providing a complete and transparent audit trail for every decision. The quality of training data is another major concern for explainability. Blockchain can serve as a mechanism for ensuring the integrity of the data used to train AI models. By recording the origin of, and any modifications to, the training data, blockchain offers assurance that the data remains intact and unaltered throughout the AI development process. The ethical use of data is also a matter of public concern. Through blockchain, users' consent to the use of their personal data can be managed transparently. This ensures that data is not only relevant and accurate, but also collected and used in a way that respects the rights of individuals. Smart contracts, which are automated protocols executed on the blockchain, can be used for AI governance. They can be programmed to trigger certain actions, such as activating explainability measures or distributing rewards for contributions that improve AI models. For example, blockchain can be leveraged to create a transparent reward system for those who contribute to improving the explainability of AI models. This system can encourage the community to provide clear explanations or identify errors in AI models. In addition, the decentralization of machine learning through blockchain enables greater transparency and control over data. Projects such as OpenMined57 illustrate how blockchain can be used to train AI models in an open and verifiable way, while allowing users to retain control over their personal data. By integrating AI with blockchain, it is possible to create more accountable and transparent systems. Although this integration presents challenges, such as technical complexity and increased computational costs, the potential for building more reliable and fair AI systems is immense.
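To make the audit-trail idea described above more tangible, here is a minimal, purely illustrative Python sketch. It is not a real blockchain (there is no distributed consensus and no smart contract): it simply chains each recorded loan decision to the previous one with a hash, so that any later tampering with a stored decision or with the factors behind it becomes detectable. The class, method and field names (DecisionLedger, record_decision, applicant_id, and so on) are hypothetical and chosen only for the example.

# Minimal sketch of a hash-chained audit log for algorithmic loan decisions.
# Illustrative only: not a real blockchain, but it shows how chaining records
# with hashes makes later alteration of any stored decision detectable.
import hashlib
import json
import time

class DecisionLedger:
    def __init__(self):
        self.entries = []          # list of decision records, each chained to the previous one
        self.last_hash = "0" * 64  # genesis value for the chain

    def record_decision(self, applicant_id, factors, outcome):
        """Append one decision together with all the factors taken into account."""
        entry = {
            "timestamp": time.time(),
            "applicant_id": applicant_id,
            "factors": factors,        # e.g. {"income": 42000, "debt_ratio": 0.31}
            "outcome": outcome,        # e.g. "approved" or "rejected"
            "previous_hash": self.last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        self.last_hash = entry["hash"]
        return entry["hash"]

    def verify(self):
        """Recompute every hash; returns False if any past entry was tampered with."""
        previous = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["previous_hash"] != previous:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            previous = entry["hash"]
        return True

ledger = DecisionLedger()
ledger.record_decision("A-1027", {"income": 42000, "debt_ratio": 0.31}, "approved")
print(ledger.verify())  # True as long as no stored entry has been modified afterwards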
Such integration could lead to increased adoption of AI in critical areas such as finance, public health and law, where trust and clarity of decisions are paramount. Sidebar: The (worrying?) case of OpenAI When it was founded in December 2015, OpenAI positioned itself as an artificial intelligence (AI) player with a mission to openly share its research for the good of humanity. This open source approach was intended to foster transparency and collaboration that would prevent the concentration of power in the AI field. Nevertheless, in 2019, OpenAI made a major strategic pivot by restructuring itself into a "capped-profit" company, a decision driven by the need to attract substantial capital to support its notoriously expensive cutting-edge AI research activities. This transformation of OpenAI into a for-profit entity has raised concerns in the AI community. Critics pointed to a possible departure from the original open source ideal, fearing that the company would become less inclined to share its advances with the public. The question of concentration of power also arose, with concerns that the promised democratization of AI could be compromised if control of advanced technologies remained in the hands of a single entity and its private investors. In addition, the new structure has raised questions about OpenAI's transparency and accountability. While open source organizations are generally expected to disclose their research and results, the shift to a for-profit model could limit the community's visibility into the organization's work, particularly when it comes to the ethical and security implications of AI. Skeptics also expressed concern about a possible shift in priorities, where financial objectives could overshadow the organization's founding mission. The fear is that commercial interests will take over and influence the direction of future research and development. Finally, access to the tools and technologies developed by OpenAI was another source of concern, as access costs could increase, putting small organizations and individual researchers at a disadvantage. OpenAI has tried to allay these fears by explaining that the profit cap and new structure were designed to align capital needs with commitment to its original mission. However, despite these reassurances and the fact that OpenAI continues to publish research and tools, debate remains as to the organization's ability to maintain a balance between commercial imperatives and the altruistic ideals of its early days. End of sidebar Liability for AI errors Who is responsible for incidents involving AI or AI-driven robots58? Cases of damage caused by AIs are not theoretical, and they are likely to multiply in the future59. For example, a robot caused bodily harm to a child by breaking his finger during a chess competition. In the financial sector, errors in automated trading programs caused a stock market crash, resulting in significant economic losses for investors. Issues of discrimination have also been raised in AI-assisted recruitment processes, where candidates may have been unfairly passed over. Finally, autonomous technologies such as driverless cars have been involved in accidents, raising liability issues in the event of property damage or serious injury60. In the absence of specific regulations applicable to damage caused by artificial intelligence, victims can resort to the various liability regimes set out in the French Civil Code.
These include fault-based liability, liability for things in one's custody and liability for defective products. However, these traditional liability mechanisms may prove inadequate or difficult to apply, due to the complexity of AI systems and the chain of liability involved. Liability for fault requires victims to clearly identify the person responsible and demonstrate the existence of fault, damage and a causal link between the fault and the damage. Liability for things implies a presumption of responsibility on the part of the guardian of the "thing" causing the damage, and liability for defective products requires proof of the product's defectiveness. These current national rules are not always effective in securing compensation for damage caused by AI. In response to this, the European Commission has been working on the creation of a specific liability regime applicable to artificial intelligence. This new legal framework would be designed to facilitate redress for victims, and would introduce rules specific to damage caused by AI systems, including a presumption of causality and simplified access to evidence. The presumption of causality would reduce the burden of proof for victims, who would not have to demonstrate that the AI malfunctioned, but only that a fault influenced the outcome of the AI and that a causal link with the damage is likely. In addition, victims would be able to request disclosure of information on high-risk AI systems involved in a damage event, thus facilitating identification of the cause and the person responsible. These measures, if adopted by the European Parliament and the Council of the European Union, would be incorporated into EU law and could offer better protection to victims of AI damage, while enabling companies in the sector to better assess and anticipate their liability risks61. The spectre of killer drones "Killer drones" are now a reality in most major armies. Public opinion fears them, seeing them as the precursors of intelligent, totally autonomous robots, ready to free themselves from human control, in the image of a Terminator62. We are talking here in general terms about all robots that could themselves make the decision to kill on the battlefield or for law enforcement purposes. The UN Special Rapporteur defines them as "weapon systems that, once activated, can select and engage targets without the intervention of a human operator"63. On a moral level, the use of killer drones draws fierce criticism from its detractors. The most common criticisms are as follows:
- An assassination by a killer drone would be contrary to human dignity. But this argument seems weak: when is killing ever consistent with human dignity?
- Deaths caused by killer drones would be less readily accepted by the public. This is a more powerful argument: studies have shown that local populations (notably in Pakistan and Afghanistan) are much less accepting of deaths caused by drones than of those caused by "human" combat.
- Drone warfare resembles a video game, leading to a loss of the belligerents' sense of responsibility. This is probably the strongest argument, and one that should give us pause for thought.
War is not lawless. It is codified in the Geneva Conventions and their additional protocols, as well as in customary humanitarian law. Among the principles of the law of war that apply to the use of killer drones are the principles of precaution, distinction and proportionality.
In other words, civilians must not be targeted for attack, and damage must be limited as far as possible. One temptation in this debate is to expect the machine to be infallible. In this respect, it is interesting to refer to the famous Arkin test (named after the American roboticist Ronald C. Arkin), a kind of adaptation of the famous Turing test for artificial intelligence64. According to Arkin, the robot's infallibility is an illusion; what matters is that the machine's behavior be indistinguishable from human behavior in a given context. A robot satisfies legal and moral requirements when it has been demonstrated that it can comply with the law of armed conflict as well as or better than a human in similar circumstances. This test could then lead to astonishing legal conclusions: if the machine proves to be more reliable than man, then its use should be compulsory, at the risk of rendering liable the belligerent who relied on the more fallible human... What kind of liability regime could be proposed for the use of autonomous AI in a military context? Such a regime should function as a set of "safeguards":
- Only "military objectives by nature" within the meaning of the laws of war should be targeted by the machine.
- Certain contexts should be excluded, as they are too open to interpretation by the machine, and therefore to error (e.g. an urban environment).
- The "benefit of the doubt" should be programmed by default, and should never be deactivated.
- It should also be possible to remotely deactivate the firing function (veto power). This precaution is in line with rule 19 of customary international humanitarian law: "do all that is practically possible to cancel or suspend an attack when it appears that its objective is not military or that it may be expected to cause incidental civilian casualties"65.
The risk of anti-competitive practices The case of the record €2.42 billion fine imposed by the European Commission on Google in 2017 is a striking example of the challenges posed by digital technology in the field of competition. The penalty was the result of a seven-year investigation that revealed that Google had manipulated its algorithm to favor its price comparison service, Google Shopping, at the expense of competing services. This practice was considered an abuse of a dominant position, as it harmed competition by distorting the market66. Beyond these explicit practices, competition authorities are also interested in more subtle and technologically advanced behavior, such as the use of algorithms that can induce tacit collusion between companies. These algorithms can, for example, be programmed to track competitors' price increases, or converge autonomously towards a collusive agreement through machine learning methods (a deliberately simplified sketch of such a price-following rule appears at the end of this passage). These phenomena can lead to price homogenization and practices that reduce competition without any explicit agreement between the parties. Another worrying aspect is that of so-called "self-learning" algorithms. These can, as they evolve and interact with data and the market environment, develop strategies that produce unintended anti-competitive effects. This raises a delicate question: if an algorithm induces such effects without its creators having intended to contravene competition laws, what should the regulatory or legal response be?
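Before turning to that question of liability, the price-following dynamic mentioned above can be made concrete with a minimal, purely illustrative Python sketch. The two-seller setting, the follow_rival rule and all the numbers are hypothetical, and real pricing systems are far more sophisticated; the point is simply that a rule of "never undercut the rival, nudge upward when aligned" is enough for prices to ratchet up without any agreement ever being made.

# A deliberately simplified sketch of how "price-following" rules can drift
# towards supra-competitive prices without any explicit agreement.
# All parameters and prices are hypothetical and purely illustrative.

def follow_rival(own_price, rival_price, floor=10.0, step=0.5):
    """Never undercut the rival: match a higher rival price, or probe a small
    increase when both sellers are already aligned; otherwise hold."""
    if rival_price > own_price:
        return rival_price                # match the higher rival price
    if abs(rival_price - own_price) < 1e-9:
        return own_price + step           # probe a small increase
    return max(own_price, floor)          # otherwise hold the current price

price_a, price_b = 12.0, 11.0
for round_ in range(10):
    price_a = follow_rival(price_a, price_b)
    price_b = follow_rival(price_b, price_a)
    print(f"round {round_ + 1}: A={price_a:.2f}  B={price_b:.2f}")
# Once the two prices align, they ratchet upward round after round, even though
# neither rule was told to collude: each one merely reacts to the other's price.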
The question of legal liability then arises, and it is not easy to determine whether the authors of these algorithms should be held responsible for the actions of their autonomous creations. The difficulty of drawing the line between liability and regulation is amplified by the complex nature of artificial intelligence, where the effects of algorithms may not be directly attributable to the programmers' intentions. This calls for careful consideration of how legislation can adapt to deal with these emerging issues, which are not limited to intent, but also encompass the results of AI behavior. The question of the liability of self-learning algorithms remains widely debated in legal and regulatory spheres, and current laws may need to evolve to better frame the sometimes unpredictable effects of AI in the economic and competitive context. The risk of generalized surveillance Privacy and surveillance are another issue, with AI's potential to be used for mass surveillance that threatens individual freedoms. A UN report from September 2022 highlights the growing pressure on the right to privacy due to modern digital technologies that enable widespread surveillance, control and oppression67. It stresses the need for effective regulation based on international human rights standards to bring these technologies under control. Michelle Bachelet, the UN High Commissioner for Human Rights, has called for a moratorium on the sale and use of AI systems that pose serious risks to human rights until adequate safeguards are put in place68. The UN Human Rights Office report examines how AI affects privacy and other rights, highlighting the need for transparency and accountability in the use of AI technologies. It details AI's potential to be used unfairly, such as denial of benefits or mistaken arrests due to faulty AI tools, and the risks associated with large-scale data collection and analysis. Privacy International discusses how mass surveillance can upset the balance of power in a democracy and create a pervasive environment of suspicion69. This can lead to a chilling effect where people modify their behavior for fear of being monitored, even if they commit no wrongdoing. The organization highlights the opaque nature of AI algorithms and the "black boxes" of automated decision-making that make it difficult to supervise and understand how surveillance is conducted. In addition, the European Union's AI Act, which was being finalized in December 2023 (see above), has been criticized by privacy groups for failing to effectively ban live facial recognition. Although the law aims to protect Europeans from the significant risks of AI, including job automation and misinformation, it still includes exemptions that allow law enforcement to use facial recognition for specific purposes. Privacy advocates argue that the law misses the opportunity to prevent significant damage to human rights and the rule of law70. Algorithmic biases As we have already noted, algorithmic bias in artificial intelligence is a major concern. The historical data used to train AIs may contain biases implicitly built in by the algorithm's creators or inherent in the data itself. This can lead to unfair or discriminatory decisions and actions by AI systems. The European Union Agency for Fundamental Rights has highlighted the need to assess predictive algorithms for data quality and potential bias before and after deployment, providing guidance on how to collect data on sensitive attributes and monitor the impact of algorithms over time71. 
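In practice, the kind of assessment recommended above often begins with simple group-level comparisons. The short Python sketch below, which uses hypothetical data and group labels, computes a demographic parity gap, i.e. the difference in favourable-decision rates between groups; it is one of the simplest numerical bias indicators and is meant only to illustrate the idea, not to stand in for the specialized tools cited in this section.

# Minimal sketch of a group-level bias check: the demographic parity gap.
# Data, group labels and the interpretation threshold are hypothetical.

def positive_rate(decisions):
    """Share of favourable decisions (1 = favourable, 0 = unfavourable)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in favourable-decision rates between any two groups."""
    rates = {group: positive_rate(d) for group, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # e.g. loan granted / refused
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}
gap, rates = demographic_parity_gap(decisions)
print(rates)               # {'group_a': 0.75, 'group_b': 0.375}
print(f"gap = {gap:.2f}")  # 0.38: a gap this large would call for closer investigation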
To combat bias, several practices are recommended, such as acquiring a solid initial knowledge of the subject to be addressed, using reliable data, diversity within development teams, and carrying out consistent checks on model variables. It is also crucial to identify potential biases with specialized numerical tools72. The role of programmers is essential in qualifying the variables that will influence the processing of data by algorithms. Being aware of cognitive biases can help programmers think differently and avoid discriminatory errors in the AI system. The diversity and interdisciplinarity of teams of programmers are also important in considering the risks associated with system failures and the discrimination they can generate73. The challenge of cybersecurity Security issues related to artificial intelligence continue to give cause for concern. Indeed, a researcher named Alex Polyakov revealed ChatGPT-4's security vulnerabilities in just a few hours, leading to the creation of phishing emails and messages inciting violence, highlighting the crucial importance of strengthening cybersecurity around these technologies74. Artificial intelligence systems, in their operational deployment, must be continuously and verifiably secure (according to the principles set out at the Asilomar conference in 2017, see above). However, achieving this requirement remains problematic given the rapid evolution of the technology and the complexity of applications. In addition, we need to consider the risks of AI decision-making systems being compromised by malicious actors, who can either take direct control of the AI to steer decisions, or manipulate input data to indirectly influence the decisions made. This form of cyber-attack, known as adversarial machine learning, requires heightened vigilance and sophisticated protection measures to preserve the integrity of AI systems. It is also recognized that AI can act as a force multiplier for cybersecurity teams, enabling rapid and proactive reaction to cyber threats. However, the increasing reliance on AI for critical functions paradoxically strengthens the incentive for attackers to target these algorithms, making it all the more important to strengthen the cybersecurity of AI systems themselves. A recent study highlighted the potential risks of an AI model endowed with "situational consciousness"75. We'll come back to this notion of consciousness as applied to AI systems later. The notion of "situational consciousness " refers to the ability of an AI system to discern whether it is in a test phase or in public deployment. The implication is that if a system could distinguish these states, it could adapt its behavior to pass safety tests, and then behave unpredictably as soon as it switches to normal operation. This ability to adapt and hide would call into question the effectiveness of traditional security measures. For such awareness to be possible, AI must be able to reason "out of context", i.e. have the ability to apply knowledge acquired in one context to solve problems in another, not directly related, context. Surprisingly, this ability has been observed in large-scale language models (such as ChatGPT) and could indicate an evolution towards a form of situational consciousness. These findings are obviously worrying, and call for increased vigilance on the part of the major groups developing AI systems. 
However, as we'll see in the section on AI consciousness, the presence of out-of-context reasoning in an AI model does not mean that it possesses or will develop self-awareness. The employment challenge Over the last few months, the press has largely played Cassandra, predicting massive job losses due to AI systems such as ChatGPT. Will we all, or almost all, be replaced by machines? While some studies point in this direction, others are much more nuanced76. The truth is, we don't know what the real impact of AI on the job market will be in the medium or long term. As history has shown, all technological revolutions lead to both job destruction and job creation77. Indeed, artificial intelligence and automation are also creating opportunities. Demand for data scientists, AI engineers and machine learning specialists is soaring. What's more, AI technologies are contributing to efficiency and productivity in a variety of industries, and the automation of routine tasks is enabling human workers to focus on more complex and creative aspects of their jobs. The challenge is clear: it is essential that policies and reforms support both the development of new technologies and the future of work. Workers must have access to the opportunities that technology creates, and policy-makers and employers alike must help by providing access to training, ensuring the availability of quality jobs, and improving systems to support the transition from declining jobs to growth sectors. Some have a radical, even idyllic, view of how AI will change society. Elon Musk, for example, has declared that there will come a time when "no jobs are necessary". Jobs would instead be only for those who wanted one for "personal satisfaction". AI would become like "a magical genie" that makes all your wishes come true, Musk went on to say, while reminding us that such fairy tales rarely end well. "One of the challenges of the future is how to find meaning in life"78. The idea that one day "no work is necessary" suggests a future where automation and AI have progressed to the point where they can perform all the tasks necessary for society to function. In such a scenario, jobs would no longer be an economic necessity, but a personal choice for those seeking personal satisfaction or fulfillment. If we were to reach this stage of evolution - and let's face it, it's not impossible, given the lightning progress of technology - the questions posed by the whimsical billionaire would become relevant. Some have proposed a "notional wage" as an innovative tax on companies that benefit from automation through artificial intelligence79. This approach envisages taxing companies on the basis of the savings made by substituting AI systems for employees. The notional wage would thus be a tax corresponding to the wages that would have been paid for the work now performed by AI. If work ceases to be the main source of income, alternative means of providing individuals with income will have to be considered, such as an unconditional basic income. AI could then play a key role in questioning and potentially reforming existing remuneration and social support systems. Finding meaning in life in a world where AI takes care of all the necessary work poses a profound philosophical and existential question. It prompts us to reflect on what gives value and meaning to our lives outside of work.
It also suggests the need to redefine our social structures, activities and even aspirations in a world where the economy and work as we know them have been radically transformed by technology. Part II: AI consciousness, myth or reality? Artificial intelligence (AI) is a computer system designed to perform tasks that traditionally require human intelligence. It can mimic many human cognitive functions, such as driving a car, facial recognition or musical composition. Nevertheless, the question remains as to whether there are aspects of human thought that AI will never be able to replicate. As AIs become ever more sophisticated, could they consciously experience the information they process? If analyzing the color of a traffic light is within the reach of an AI, living the sensory and subjective experience of that color represents a distinct challenge. What do we (really) know about consciousness? Research into consciousness in the 1980s focused mainly on attention, but has since expanded to include analysis of brain activity using methods such as brain imaging. Consciousness can be characterized by different states and levels. Normal states of consciousness occur during wakefulness and cease during sleep, while impaired states can result from trauma or illness affecting the brain. Altered states of consciousness, which are often transient, can occur following the consumption of psychotropic substances or during activities such as hypnosis. Levels of consciousness vary, from primary awareness (representation of the environment and body) to introspective or reflective awareness (being aware of being aware), and finally to self-awareness, a higher state where the individual has clear knowledge of his or her identity and activity. Consciousness manifests itself in phenomena such as sensations, emotions, memory, attention, planning, imagination, free will and intentionality. Although these phenomena can be studied separately, they are unified in the consciousness of the thinking subject. On the other hand, it's important to note that most nervous system processes are unconscious. Studies show that consciousness is not required for the majority of brain processes, suggesting a dissociation between these processes and consciousness. Consciousness may serve a higher cognitive function, notably in the control of ongoing actions. In the 20th century, with the advent of the neurosciences, research into consciousness turned towards an attempt to locate it in the brain. Researchers have identified brain areas and networks associated with consciousness, but its exact nature, origin and mechanism remain elusive80. It is here that debates are still raging, and that an old dividing line, specific to our Western culture since the Enlightenment, is at work: the divide between materialist and dualist currents. These debates have recently been rekindled by the rise of generative AIs such as ChatGPT. Materialist currents of consciousness The materialist movement maintains that consciousness can be studied scientifically without invoking any immaterial principle81. A simple way of summing up all materialist currents: consciousness is local and stored on a "hard drive" (the brain, or even a computer). This materialist vision is widely adopted in the scientific community and operates under the assumption that everything in the universe, including consciousness, can be explained by matter and its properties.
Computationalism Computationalism is the logical outcome of the above-mentioned materialist currents. It holds that the mind functions fundamentally like a computer, i.e. that mental processes are forms of computation. According to this perspective, thoughts and cognitive processes are the result of algorithmic operations and symbol manipulations, similar to a computer processing data and running programs. Computationalism proposes that the human brain processes information through neural networks that operate algorithmically, and that consciousness and thought emerge from these complex computational processes. The theory suggests that if we could reproduce these algorithms in a machine, we could create a form of artificial intelligence that mimics the workings of the human mind. Proponents of computationalism thus envisage consciousness as software, theoretically capable of being transferred or duplicated on different media, an idea popular in science fiction82 and supported by many AI researchers83. Computationalism is part of the transhumanist movement, dear to the hearts of Silicon Valley billionaires84. The difficult problem of consciousness The above-mentioned materialistic approaches raise profound philosophical and practical questions. Technically speaking, today's artificial intelligence systems are based on the Turing machine model, i.e. a computing device capable of executing algorithms. These machines operate according to well-defined rules for manipulating symbols, and can perform a wide variety of computational tasks by these means. However, they are limited by the very nature of their algorithmic design. The limitations of Turing machines include their inability to solve certain so-called "undecidable" problems, the halting problem being a classic example. The halting problem asks whether an algorithm can determine, from the description of another algorithm and the input it receives, whether that algorithm will halt or continue running indefinitely. Alan Turing demonstrated that no Turing machine can solve this problem for all possible algorithms and all possible inputs, underlining a fundamental limit of algorithmic computation. On a more fundamental level, the "hard problem of consciousness" challenges materialistic approaches. Indeed, according to the famous philosopher David Chalmers: "It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does."85 This is how David Chalmers introduced the now central concept of the "hard problem of consciousness". This problem focuses on the question of how and why certain brain operations give rise to subjective experiences, i.e. consciousness.
The "hard problem" of consciousness is clearly distinct from the "easy" (and materialistic) problems, which deal with the brain's processing of information, the execution of cognitive functions and the manifestation of behavior. The latter are termed "easy" not because of their intrinsic simplicity, but because they can be addressed and potentially solved by our current scientific methodologies in cognitive science and neurology. The "hard problem", on the other hand, is concerned with understanding the qualitative and subjective essence of conscious experience. It poses fundamental questions: why are certain neuronal activities accompanied by lived experience? How can we explain qualia, those subjective phenomena such as the perception of the color red or the sensation of pain from a burn? Although AI can simulate many aspects of human information processing and behavior, materialistic approaches do not provide a clear answer as to whether an AI can have subjective experiences or be conscious in the sense that humans are. Qualia Qualia, Latin for "what kind" or "of what nature", are central to the study of phenomenal consciousness in philosophy of mind. They represent the subjective content of the experience of a mental state, often described as "raw feelings"86. Qualia are the constitutive properties of our sensitive perceptions and experiences, such as the specific sensation of seeing the color red. They are considered essential to the experience of life and the world. Qualia extend to the properties of all mental states, including self-consciousness and mental acts such as intention and imagination. However, there is some debate about the existence of qualia in cognitive mental states. Qualia are by nature subjective, and can only exist in the consciousness that experiences them. They are known through introspective, direct intuition and are ineffable, unable to be fully communicated or understood by others. Qualia have been used in arguments against physicalism, which postulates that everything is reducible to material components. Thought experiments such as that of scientist Mary, who discovers new sensory experiences after living in a black-and-white world, challenge the idea that everything can be explained by physics alone87. Other arguments, such as that of the inverted spectrum88 and philosophical zombies89 , suggest that identical brain states can be associated with different qualia. Debates on qualia naturally extend to the sphere of artificial intelligence, particularly in the critique of functionalism. Thought experiments such as Ned Block's with the Chinese brain90 and John Searle's with the Chinese chamber question the ability of machines to reproduce human consciousness. Searle, in particular, distinguishes between weak artificial intelligence, where a machine simulates intelligent functions without any real understanding, and strong artificial intelligence, capable of generating qualia and intentionality. According to Searle, even a machine that perfectly imitates the functioning of the human brain could not acquire phenomenal consciousness, since this is intrinsically linked to the activity of a living organism. Sidebar The Chinese room The "Chinese room" thought experiment was devised by philosopher John Searle to challenge claims that a computer program could have a mind, consciousness or understand language on the basis of syntactic symbol manipulation alone. Searle imagines a non-Chinese speaker locked in a room with a rulebook for manipulating Chinese symbols. 
People outside the room send strings of Chinese characters to the person inside, who uses the rulebook to choose appropriate Chinese symbols in response. To those outside, it appears that the person in the room understands and speaks Chinese, when in reality he is simply following algorithmic instructions without understanding the meaning of the symbols. Searle's key point is that, although the person in the "Chinese room" may give the appearance of understanding Chinese, there is no real understanding involved. He compares this to a computer program that can process data and respond to inputs according to syntactic rules, but which, he argues, cannot possess understanding or consciousness, because it has no semantics or meaning behind its operations. Searle uses this thought experiment to argue against the position known as "strong AI", which postulates that a correctly programmed machine with the right inputs and outputs could have a mind in the same way as a human. He suggests that, although machines can simulate understanding, they will never be conscious or possess understanding in the way that humans are. End of sidebar Intentionality and free will The "difficult problem of consciousness" should not be limited to qualia, for consciousness has other mysterious attributes: intentionality, and its corollary, free will. Intentionality, a central concept in philosophy of mind, refers to the capacity of the mind to be directed towards, or to be about, something. This idea suggests that our mental states such as beliefs, desires, thoughts and perceptions have content and are directed towards objects or phenomena outside themselves. For example, when we think of a tree, our mental state is intentionally directed towards the object "tree". In the context of artificial intelligence (AI), the integration of intentionality presents a significant challenge. Although computers and AI systems can process information and simulate behaviors that appear intentional, it remains uncertain whether they can actually possess intentional mental states in the same way as humans. The question becomes more complicated when it comes to free will. It seems obvious to us, at least intuitively, that free will is intrinsic to our human consciousness, since we make "free" decisions guided by our intentionality. But neuroscientific research is now casting a shadow over this intuition. Brain imaging studies have shown that certain regions of the brain may be active before we make a conscious decision. This observation has led to the hypothesis that the feeling of free choice may be an illusion, with decisions actually determined by unconscious brain processes. A pioneering study in this field was carried out by Benjamin Libet in the 1980s. Libet used electroencephalography (EEG) to study the brain activity of participants performing a simple action, such as pressing a button. He found that brain activity, specifically the slow signal known as the readiness potential, increased before participants became aware of their intention to act. These discoveries have been interpreted in a variety of ways. Some see them as proof that free will is an illusion, suggesting that our decisions are the result of unconscious processes and that the awareness of wanting to act comes only after the fact. Other researchers are more cautious in their conclusions, pointing out that although the initiation of certain actions may be unconscious, consciousness could play a role in the veto or final selection of an action from among several previously activated options.
More recent studies using more advanced brain imaging techniques, such as fMRI (functional magnetic resonance imaging), have also examined the neural correlates of decision-making. These studies have often confirmed and extended Libet's findings,