Lecture 09 – Introduction to AI / AI Act (PDF)
Document Details
Technical University of Munich
2024
Chiara Ullstein, Prof. Jens Grossklags
Summary
This lecture provides an introduction to artificial intelligence, discussing different views and definitions. It also touches on the topic of AI regulation from a European perspective, focusing on the EU AI Act.
Full Transcript
IT and Society, Lecture 9: Artificial Intelligence – Introduction
Chiara Ullstein, M.Sc., Prof. Jens Grossklags, Ph.D.
Professorship of Cyber Trust, Department of Computer Science, School of Computation, Information, and Technology, Technical University of Munich
June 17, 2024

Recap – Nudging
"Don't push. Don't pull. 'Nudge'."
– Encourage people to make decisions that are in their broad self-interest through a relatively subtle policy shift
– … or in the interest of somebody else?
– Countless government and outside-government "nudge units": presumably all have their specific agenda.
Classical nudging versus digital nudging
– Provided many examples: look around for more.
– Different classes of nudges: default option, social proof heuristics, reminder, providing feedback, element of entertainment, disclosure

Recap – Societal-scale Mechanisms
Chinese Social Credit System
– Economic and societal needs versus fear of a dystopian future
– Focuses on economic factors and moral values
– Implementation at various levels and with an "experimental" agenda
An empirical study reveals fascinating differences regarding the treatment of "good" and "bad" individuals or organizations.
Closely observed worldwide. A test case for more ambitious efforts by "nudge units"?

What is Artificial Intelligence?
No clear consensus on the definition of AI
– John McCarthy coined the phrase AI in 1956
– Developed a Q&A on this subject:
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human or other intelligence, but AI does not have to confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html

Different Views
– Haugeland (1985): "The exciting new effort to make computers think … machines with minds, in the full and literal sense." (thinking humanly)
– Winston (1992): "The study of the computations that make it possible to perceive, reason and act." (thinking rationally)
– Kurzweil (1990): "The art of creating machines that perform functions that require intelligence when performed by people." (acting humanly)
– Luger and Stubblefield (1993): "The branch of computer science that is concerned with the automation of intelligent behavior." (acting rationally)
Preferred technical view: acting rationally. Rational: maximize goal achievement; no mistakes.

Another "Working Definition" of AI
Artificial intelligence is the study of how to make computers do things that people are better at, or would be better at if they could extend what they do to a World Wide Web-sized amount of data and not make mistakes.
Clearly motivated by the "acting rationally" view.

Is AI Rational? Are Humans?
[Comic, ca. 1990]
Humans are amazing in their abilities, but also boundedly rational.

More Definitions? I am glad you asked…
"AI can have two purposes. One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robotics and expert systems are major branches of that. The other is to use a computer's artificial intelligence to understand how humans think. In a humanoid way. If you test your programs not merely by what they can accomplish, but how they accomplish it, then you're really doing cognitive science; you're using AI to understand the human mind." – Herbert Simon

How to Measure "Success" in an AI World?
Turing Test
– Add vision and robotics to get the total Turing test.
– One critique of the Turing test: focus on the Turing test leads to non-serious, gimmick-like work.

Eliza: Psychotherapist Program
Conceived by Joseph Weizenbaum in 1966. Simple strategies used:
– Keywords and pre-canned responses: "Perhaps, I could get along with my mother." or, more generally, any input containing {mother | father | brother | sister …} → "Can you tell me more about your family?"
– Parroting: "My boyfriend made me come here." → "Your boyfriend made you come here?"
– Highly general questions: "In what way?", "Can you give a specific example?"
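These three strategies are simple enough to sketch in a few lines of code. The following is a minimal illustrative sketch, not Weizenbaum's original implementation; the keyword rules and pronoun swaps are invented for the example:

    # Minimal Eliza-style responder (illustrative sketch only).
    # It demonstrates the three strategies from the slide: keyword rules with
    # pre-canned responses, parroting the statement back, and generic fallback questions.
    import random
    import re

    KEYWORD_RULES = {          # keyword -> pre-canned response (hypothetical rules)
        "mother": "Can you tell me more about your family?",
        "father": "Can you tell me more about your family?",
        "brother": "Can you tell me more about your family?",
        "sister": "Can you tell me more about your family?",
    }

    GENERIC_QUESTIONS = ["In what way?", "Can you give a specific example?"]

    # simple first-/second-person swaps used for parroting
    PRONOUN_SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are"}

    def respond(utterance: str) -> str:
        words = re.findall(r"[a-z']+", utterance.lower())
        # Strategy 1: keyword triggers a pre-canned response
        for word in words:
            if word in KEYWORD_RULES:
                return KEYWORD_RULES[word]
        # Strategy 2: parroting, when the input talks about the speaker
        if any(w in PRONOUN_SWAPS for w in words):
            echoed = " ".join(PRONOUN_SWAPS.get(w, w) for w in words)
            return echoed.capitalize() + "?"
        # Strategy 3: highly general fallback question
        return random.choice(GENERIC_QUESTIONS)

    print(respond("Perhaps I could get along with my mother"))  # keyword rule
    print(respond("My boyfriend made me come here"))            # parroting

The point of the sketch is how little machinery is needed to produce superficially "understanding" conversation, which is exactly what made Eliza famous.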
Loebner Prize (1991–2019)
Forum for competitive Turing tests
– Rules evolved, e.g., 5 minutes of conversation until 2003; later more than 20 minutes
– Bots cannot be connected to the Internet
Categories:
– Gold prize (audio and visual): never won
– Silver prize (text only): never won
→ No successful Turing test according to current rules
– Bronze medal: computer system with the "most human" conversational behavior in a given year
Competitions are a central factor in AI progress!

Steve Worswick: five-time winner of the most human-like chatbot award with his system Mitsuku
Interview with Worswick: "What keeps me going is when I get emails or comments in the chat-logs from people telling me how Mitsuku has helped them with a situation whether it was dating advice, being bullied at school, coping with illness or even advice about job interviews. I also get many elderly people who talk to her for companionship."
Commentary: "Any advertiser who doesn't sit bolt upright after reading that doesn't understand the dark art of manipulation on which their craft depends." – Wall Street Journal, "Advertising's New Frontier: Talk to the Bot" (2015)

Ashley Madison Data Breach
– Site for extra-marital affairs
– July 2015: data theft including emails, names, home addresses, sexual fantasies and credit card information; the attackers threatened to post the data online. The leak also included code for bots.
– Data fields tell us that 20 million men out of 31 million received bot mail, and about 11 million of them were chatted up by an automated "engager."
https://gizmodo.com/ashley-madison-code-shows-more-women-and-more-bots-1727613924

Presentation by Sundar Pichai (Google CEO): Another, Much More Sophisticated Example
Google Duplex: A.I. Assistant
– Natural language processing
– Deep learning
– Text-to-speech conversion
What is Deep Learning?
– Typically based on artificial neural networks
– Each level of the network learns to transform its input data into a slightly more abstract and composite representation
More recent example: GPT-4o
"With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network."
https://openai.com/index/hello-gpt-4o/
https://www.youtube.com/watch?v=D5VN56jQMWM
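The "increasingly abstract representations" idea can be made concrete with a small sketch: a stack of fully connected layers, each transforming the previous layer's output. This is a minimal illustrative sketch with random, untrained weights and arbitrarily chosen sizes, not a description of Duplex or GPT-4o:

    # Illustrative sketch only: each layer maps its input to a new representation.
    # In deep learning the weights would be learned from data; here they are random.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, n_out):
        """One fully connected layer followed by a ReLU nonlinearity."""
        w = rng.normal(size=(x.shape[-1], n_out))
        return np.maximum(0.0, x @ w)            # ReLU(x W)

    x = rng.normal(size=(1, 64))                 # raw input (e.g., pixel features)
    h1 = layer(x, 32)                            # first, low-level representation
    h2 = layer(h1, 16)                           # more abstract, composite representation
    h3 = layer(h2, 8)                            # even more abstract representation
    print(x.shape, h1.shape, h2.shape, h3.shape) # (1, 64) (1, 32) (1, 16) (1, 8)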
Strong versus Weak AI
Strong AI is artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that can successfully perform any intellectual task that a human being can.
– Primary goal of artificial intelligence research and an important topic for science fiction writers and futurists
– Strong AI is also referred to as "artificial general intelligence" or as the ability to perform "general intelligent action"
– Science fiction associates strong AI with such human traits as consciousness, sentience, sapience and self-awareness
Weak AI is an artificial intelligence system which is not intended to match or exceed the capabilities of human beings, as opposed to strong AI, which is. Also known as applied AI or narrow AI.
– The weak AI hypothesis: the philosophical position that machines can demonstrate intelligence, but do not necessarily have a mind, mental states or consciousness
What is possible?

How Intelligent or Conscious Can Machines Get?
A critique: the Chinese Room experiment, John Searle (1980)
[Image source: Telefonica, "Surviving the AI Hype – Fundamental Concepts to Understand Artificial Intelligence"]

AI Winter(s) in the 1970s–1990s
After rapid expansion, there is often a period of disillusionment and contraction
– Doubts about the feasibility of the approach and too many promises
Dartmouth Conference (1956): "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."
– Problems: limited computing power, combinatorial explosion, end of large-scale funding

[Figure. Source: State-of-the-Art Mobile Intelligence: Enabling Robots to Move Like Humans by Estimating Mobility with Artificial Intelligence]

Discuss (Gartner Hype Cycle for Emerging Technologies, 2017): "Complementary emerging technologies such as machine learning, blockchain […] have moved significantly along the Hype Cycle since 2016."
https://www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017/

2018 version: machine learning as a standalone general concept is gone from this version; the focus is now on "democratized AI" = availability to the masses.
https://www.gartner.com/smarterwithgartner/5-trends-emerge-in-gartner-hype-cycle-for-emerging-technologies-2018/

2019 version: what has changed? "Some technologies will provide …"
https://www.gartner.com/smarterwithgartner/5-trends-appear-on-the-gartner-hype-cycle-for-emerging-technologies-2019/

2023 version: what has changed? High expectations for generative AI.
https://www.gartner.com/en/newsroom/press-releases/2023-11-28-gartner-hype-cycle-shows-ai-practices-and-platform-engineering-will-reach-mainstream-adoption-in-software-engineering-in-two-to-five-years (November 2023)

Fears Related to AI
Impact on the job market (see first lecture):
– Is AI primarily job-replacing?
– Is AI primarily job-enabling?
Opposing views about a glorious AI-centric future versus a dystopian AI-dominated future
– Who is correct?

AI for Good

AI for Good – Topics
– Sustainable AI: food, energy and water
– Environment and AI: healthy oceans, protect wildlife
– Health and AI: health, sleep, nutrition
– Transparency and AI: fighting corruption
– Education and AI: personalized education
(Look for examples.)
Also: harvesting human intelligence (for good) as a byproduct of fighting artificial intelligence advances. Prime example: Captcha → reCaptcha (completely automated public Turing test to tell computers and humans apart)

AI Regulation as a Mode to Foster Trustworthy AI (European Perspective)
EU Artificial Intelligence Act (AI Act)
We will discuss:
1. Procedure
2. Legislation (scope, risk levels, compliance)
Overview on the EU AI Act
The EU AI Act is the first-ever legal framework on AI. It:
– addresses risks of AI
– fosters trustworthy AI in Europe and beyond
– positions Europe to play a leading role globally
– aims to provide AI developers, deployers, and users with clear requirements and obligations
– seeks to reduce administrative and financial burdens for business, in particular SMEs
Proposed by the European Commission in 2021 → right of legislative initiative
Legislators: the Council of the European Union and the European Parliament
Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai; https://digital-strategy.ec.europa.eu/en/policies/plan-ai; https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383

Putting the EU AI Act into Context
The AI Act is part of a wider package of policy measures → guarantee safety and fundamental rights of people and businesses
I. Regulatory Framework (AI Act): key policy objectives
II. Coordinated Plan on AI: key aims: accelerate investment, act on strategies and align policy to avoid fragmentation
III. AI innovation package: support for AI start-ups and SMEs
Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai; https://digital-strategy.ec.europa.eu/en/policies/plan-ai; https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383

3 Main Institutions Involved in EU Decision-making
– European Commission (EC)
– European Parliament (EP)
– Council of the European Union (informally known as the Council; composed of national ministers from each EU country) and European Council (the body of leaders, i.e. heads of state or government, of the 27 EU member states)
https://www.consilium.europa.eu/en/european-council-and-council-of-the-eu/

EU Decision-making: Ordinary Legislative Procedure
The ordinary legislative procedure consists of the joint adoption by the European Parliament and the Council of the European Union of a regulation, directive or decision, in general on a proposal from the Commission.
Source: https://www.consilium.europa.eu/de/council-eu/decision-making/ordinary-legislative-procedure/; https://www.consilium.europa.eu/en/european-council/members/; https://www.consilium.europa.eu/en/council-eu/decision-making/ordinary-legislative-procedure/; https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=LEGISSUM:ordinary_legislative_procedure

Overview on the Process (timeline figure; "WE ARE HERE": final adoption, signature, publication)
– Public debate (from 2019): elect member state governments, elect Members of the European Parliament, public consultations, advocacy, national/EU-wide campaigning, contacting individual MEPs
– Proposal: April 2021
– Amendments published: EP: June 14, 2023; Council: Nov. 25, 2023
– Trilogues (optional; informal interinstitutional meetings): start June 14, 2023; end Dec. 9, 2023
– Act: final adoption, signature, publication
Source: https://www.europarl.europa.eu/infographic/legislative-procedure/index_en.html; https://www.consilium.europa.eu/en/council-eu/decision-making/ordinary-legislative-procedure/; https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Example: Changes Suggested to the AI Act Proposal
Different committees of the EP suggest amendments.
A list of officially published documents with amendments: https://artificialintelligenceact.eu/documents/
The suggested changes by the EP are summarized in a document called a "negotiating mandate." This mandate allows the responsible Members of the European Parliament to enter discussions with the Council to find a political agreement. Additionally, a multitude of comments and opinions exist from academics, non-governmental organizations, etc., which have influenced the amendments.
Amendments = suggested changes to the AI Act Proposal
[Example amendment shown on the slide]
Source: https://artificialintelligenceact.eu/wp-content/uploads/2022/06/AIA-IMCO-LIBE-Report-All-Amendments-14-June.pdf

Example: Differing Positions on Facial Analysis AI (May 2023)
– Both classification and identification/verification are regulated by the AI Act
European Commission (source: AI Act proposal, April 2021):
– high-risk: 'real-time' and 'post' remote biometric identification of natural persons
– harmonized transparency rules for emotion recognition systems and biometric categorization systems
European Parliament (source: negotiating mandate, 14 June 2023):
– prohibition: "[b]iometric categorisation systems using sensitive characteristics" and "[e]motion recognition systems in law enforcement, border management, workplace, and educational institutions"
Council of the European Union (source: general approach, 25 Nov. 2022):
– additional transparency obligation to inform when exposed to emotion recognition systems
→ It is the task of the EP and the Council to find a political agreement, which involves compromises.

Conclusion of the Trilogues
– A political agreement was found (end of the trilogues) on 9 December 2023
Source: European Parliament (2024). https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai; https://www.euractiv.com/section/digital/news/ai-trilogues-round-one-the-eucs-opposition-front/; https://www.euractiv.com/section/artificial-intelligence/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/

European Parliament Approves the AI Act → 1st Reading (after the trilogues), 13 March 2024
Plenary vote: AI Act endorsed by MEPs with
– 523 votes in favour,
– 46 against and
– 49 abstentions.
[Adopted text available as HTML/PDF; PDF made available to the Council]
Source: European Parliament (2024). https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law; Votes: https://www.europarl.europa.eu/doceo/document/PV-9-2024-03-13-RCV_EN.html; Press conference: https://multimedia.europarl.europa.eu/en/webstreaming/press-conference-by-brando-benifei-and-dragos-tudorache-co-rapporteurs-on-ai-act-plenary-vote_20240313-1100-SPECIAL-PRESSER

The AI Act Has Been Approved by the Council | 21 May 2024 (after the trilogues)
Law that the Council approved: https://data.consilium.europa.eu/doc/document/PE-24-2024-INIT/en/pdf
Signature of the text has yet to happen (as of 17 June 2024).
Source: Council of the European Union (2024). https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/

Next Steps for the EU AI Act (publication in the Official Journal expected for mid-July 2024)
– Entering into force: 20 days after publication in the Official Journal (expected for August 2024)
– After 6 months: bans on prohibited AI systems
– After 9 months: codes of practice are ready
– After 12 months: general-purpose AI rules and governance rules apply
– After 24 months: fully applicable (limited risk, minimal risk, high-risk systems incl. Annex III)
– After 36 months: obligations for high-risk AI systems that are already regulated by other EU legislation (Annex I)
Source: European Parliament (2024). https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law; Luca Bertuzzi (2024). EU's AI rulebook to be published in July, enter into force in August. https://mlex.shorthandstories.com/eus-ai-rulebook-to-be-published-in-july-enter-into-force-in-august/index.html

Risk-based Approach to AI Regulation: Definition of Risk
Definition of 'risk' in Article 3(2) of the EU AI Act: 'risk' means the combination of the probability of an occurrence of harm and the severity of that harm.
Reference to the EU Charter of Fundamental Rights in Recital 1 of the EU AI Act: "The purpose of this Regulation is […] to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the 'Charter'), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation."
Source: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf
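The Act defines risk qualitatively; it does not attach numbers or a formula to the definition. Purely as an illustration of how a "combination of probability and severity" is often operationalized in general risk management (not an AI Act method), here is a hypothetical probability-severity matrix; the levels and thresholds are invented:

    # Illustrative sketch only: one common way to combine probability and severity.
    LEVELS = {"low": 1, "medium": 2, "high": 3}

    def combined_risk(probability: str, severity: str) -> str:
        score = LEVELS[probability] * LEVELS[severity]   # score in 1..9
        if score >= 6:
            return "high"
        if score >= 3:
            return "medium"
        return "low"

    print(combined_risk("low", "high"))    # medium
    print(combined_risk("high", "high"))   # high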
When Does the EU AI Act Apply? → Scope (Article 2; excerpt)
"1. This Regulation applies to: (a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country; …" (the slide annotates the who, what and where of this provision)
Important: the AI Act applies if output produced by the AI system is used in the Union.
The regulation also applies to deployers, importers and distributors, product manufacturers (e.g., if a product marketed under their own brand includes an AI system by another provider), authorized representatives of providers (e.g., if the provider is not in the European Union), and affected persons that are located in the Union.
Exceptions exist: military, defense, or national security purposes; public authorities in a third country and international organizations; the sole purpose of scientific research and development; research, testing, or development activity prior to being placed on the market or put into service; purely personal non-professional activity; free and open-source licenses.
Source: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf

Structure of the AI Act: What Does the Regulation Cover? (Excerpt)
– Prohibited Artificial Intelligence Practices (Art. 5)
– High-risk AI Systems (Art. 6-49), including: classification, requirements, obligations, notifying authorities and notified bodies, standards, conformity assessment, certificates, registration
– Transparency Obligations for Providers & Deployers of Certain AI Systems (Art. 50)
– General-purpose AI Models (Art. 51-56), including: classification, obligations for providers of general-purpose AI with/without systemic risk
– Measures in Support of Innovation (Art. 57-63)
– Governance (Art. 64-70)
– EU Database for High-risk AI Systems (Art. 71)
– Chapter IX: Post-market Monitoring, Information Sharing, Market Surveillance (Art. 72-94)
– Codes of Conduct and Guidelines (Art. 95-96)
– Delegation of Power and Committee Procedure (Art. 97-98)
– Penalties (Art. 99-101)
Specifications can be found in the Annexes.
Source: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf

Risk-based Approach to AI Regulation: Different Levels of Risk (risk levels as reflected in the articles of the EU AI Act, previous slide)
– Unacceptable risk → anything considered a clear threat to EU citizens. Examples: social scoring by governments, toys using voice assistance that encourages dangerous behavior in children. Regulatory measure: banned (Prohibited Artificial Intelligence Practices).
– High risk (next slide) (High-risk AI Systems).
– Limited risk → intention: enable informed decisions. Examples: chatbots, deepfakes, emotion recognition systems. Regulatory measure: inform the natural person about exposure / disclose that the content has been artificially generated or manipulated by labeling it as such; ensure outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (Transparency Obligations for Providers & Deployers of Certain AI Systems).
– Minimal risk → minimal or no risk for citizens' rights or safety. Examples: AI-enabled video games or spam filters (the vast majority of AI systems currently used in the EU fall into this category). Regulatory measure: none; optional codes of conduct and guidelines (Codes of Conduct and Guidelines).
Source: https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en (last accessed: June 15, 2024)
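For the limited-risk transparency obligation above, one way "marked in a machine-readable format" could be realized is as structured metadata attached to the generated output. A minimal illustrative sketch; the AI Act does not prescribe this or any particular format, and the field names are invented:

    # Illustrative sketch only: attach a machine-readable marker to AI-generated content.
    import json
    from datetime import datetime, timezone

    def mark_as_ai_generated(text: str, system_name: str) -> str:
        record = {
            "content": text,
            "ai_generated": True,                      # machine-readable flag
            "generating_system": system_name,          # hypothetical metadata field
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        return json.dumps(record)

    print(mark_as_ai_generated("Example chatbot answer.", "demo-chatbot"))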
Risk-based Approach to AI Regulation: High-risk AI Systems
– Biometrics (e.g., remote biometric identification systems, biometric categorisation, emotion recognition; the application of some of these systems is prohibited)
– Critical infrastructures (e.g., transport) that could put the life and health of citizens at risk
– Educational or vocational training that may determine access to education and the professional course of someone's life (e.g., scoring of exams)
– Safety components of products (e.g., AI application in robot-assisted surgery)
– Employment, management of workers and access to self-employment (e.g., CV-sorting software for recruitment procedures)
– Essential private and public services (e.g., credit scoring denying citizens the opportunity to obtain a loan)
– Law enforcement that may interfere with people's fundamental rights (e.g., evaluation of the reliability of evidence)
– Migration, asylum and border control management (e.g., automated examination of visa applications)
– Administration of justice and democratic processes (e.g., AI solutions to search for court rulings)
Source: https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en (last accessed: June 15, 2024)

Compliance Procedure for Providers of High-risk AI Systems
[Flowchart; the obligations are listed on the next slide]
Source: https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en (last accessed: June 15, 2024)

Obligations for Providers of High-risk AI Systems
High-risk AI systems will be subject to strict obligations before they can be put on the market:
– adequate risk assessment and mitigation systems;
– high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
– logging of activity to ensure traceability of results;
– detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
– clear and adequate information to the deployer;
– appropriate human oversight measures to minimise risk;
– high level of robustness, security and accuracy.
Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (last accessed: June 15, 2024)

Many Requirements Are Rather Vague
Many vague requirements, examples: …
Responsible standardization body: …

From High-level Requirements to Technical Standards
– The EU AI Act does not mandate specific technical solutions/approaches but rather high-level requirements.
– Technical solutions for fulfilling the requirements in practice will be specified primarily in the form of technical standards.
– Standards capture best practices and state-of-the-art techniques and methods (in trustworthy AI).
– Standards will be based on existing standards (published independently of the EU AI Act) as well as voluntary (and successful) approaches and best practices suggested by industry and academia, e.g.: ISO/IEC 42001:2023 Artificial Intelligence Management System; Datasheets for Datasets; The Dataset Nutrition Label; Model Cards; AI FactSheets.
→ Transition from voluntary practices to hard legal requirements
Source: Hupont et al. (2023). https://doi.org/10.1109/MC.2023.3235712, p. 19; T. Gebru et al., "Datasheets for datasets," Commun. ACM, vol. 64, no. 12, pp. 86–92, Nov. 2021, doi: 10.1145/3458723; S. Holland, A. Hosny, S. Newman, J. Joseph, and K. Chmielinski, "The dataset nutrition label: A framework to drive higher data quality standards," 2018, arXiv:1805.03677; K. S. Chmielinski et al., "The dataset nutrition label (2nd gen): Leveraging context to mitigate harms in artificial intelligence," 2022, arXiv:2201.03954; M. Mitchell et al., "Model cards for model reporting," in Proc. Conf. Fairness, Accountability, Transparency (FAT), Jan. 2019, pp. 220–229, doi: 10.1145/3287560.3287596; M. Arnold et al., "FactSheets: Increasing trust in AI services through supplier's declarations of conformity," IBM J. Res. Develop., vol. 63, no. 4/5, pp. 6:1–6:13, Jul./Sep. 2019, doi: 10.1147/JRD.2019.2942288; "OECD Framework for the Classification of AI Systems: A tool for effective AI policies," OECD, Paris, France, 2022. [Online]. Available: https://oecd.ai/en/classification; ISO/IEC 42001:2023, https://www.iso.org/standard/81230.html
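As an example of the voluntary practices named above, a model card gathers key facts about a model in a structured, reviewable form. The sketch below is a simplified, hypothetical illustration in the spirit of Mitchell et al. (2019); the fields and example values are invented and are neither the official model card schema nor an AI Act template:

    # Illustrative sketch only: a minimal model-card-style record.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelCard:
        model_name: str
        intended_use: str
        training_data: str
        evaluation_results: dict = field(default_factory=dict)
        known_limitations: list = field(default_factory=list)
        human_oversight_measures: list = field(default_factory=list)

    card = ModelCard(
        model_name="cv-screening-demo",                 # hypothetical example
        intended_use="Pre-sorting of job applications; final decision by a human.",
        training_data="Anonymized historical applications, 2018-2023 (hypothetical).",
        evaluation_results={"accuracy": 0.87, "demographic_parity_gap": 0.04},
        known_limitations=["Not validated for non-EU labour markets."],
        human_oversight_measures=["A recruiter reviews every rejection."],
    )
    print(json.dumps(asdict(card), indent=2))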
Studies on the Distribution of the Risk Categories
– Hauer et al. (2023): classified 514 cases.
– Applied AI study (2023): 18% of the AI systems are in the high-risk class, 42% are low-risk, and for 40% it is unclear whether they fall into the high-risk class or not. Thus, the percentage of high-risk systems in this sample ranges from 18% to 58%. One of the AI systems may be prohibited. Most high-risk systems are expected to be in human resources, customer service, accounting and finance, and legal.
Note: the studies referred to the AI Act Proposal from 2021; the shares may differ for the adopted AI Act.
Source: Hauer et al. (2023). Quantitative study about the estimated impact of the AI Act. Available at: https://arxiv.org/abs/2304.06503; AppliedAI (March 2023). AI Act: Risk Classification of AI Systems from a Practical Perspective. Available at: https://www.appliedai.de/en/hub-en/ai-act-risk-classification-of-ai-systems-from-a-practical-perspective

Takeaways
– Artificial intelligence research, practice and deployment has a long and rich history: lots of interesting facets (booms and busts).
– Very different directions: e.g., modeling human intelligence versus creating actionable systems delivering output that is difficult for humans to accomplish.
– The EU AI Act is the first legally binding regulation on AI, entering into force in mid-2024 (aim: promote the uptake of human-centric and trustworthy AI).

Tackle Hard(er) Questions
Arguments by Joseph Weizenbaum:
– It is not just wrong, but dangerous and, in some cases, immoral to assume that computers would be able to do everything given enough processing power and clever programming.
– "No other organism, and certainly no computer, can be made to confront genuine human problems in human terms." (Computer Power and Human Reason: From Judgment to Calculation, 1976)
And yet we use computers and AI for exactly that more and more.

No more for today. The End.