Summary

This document discusses the history of artificial intelligence (AI), including the waves of hype around different AI approaches. It touches on the potential for AI to cause social harm through bias, misalignment with human goals, and error-prone decision-making. It also examines how AI cognition compares with human cognition through tasks such as the false-belief and no-belief tasks.

Full Transcript

(Last) Quiz and Attendance!

Reminders
Paper #3 due December 9th.
During class time on December 9th: come here (304 Barnard Hall) if you want to take the final, or join Zoom (through Canvas) if you'd like to attend a review session and take the final on December 18th as officially scheduled.

Artificial Intelligence

[Figure: "Hype level" plotted over time, 1956 through the 2020s, rising and falling across the decades]

"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956… An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems…" (1956)

Newell & Simon's Physical Symbol System hypothesis: any system exhibiting intelligence must operate through symbol manipulation.

Milestones along the timeline: ImageNet (Fei-Fei Li); the return of neural networks; 2022: DALL-E, Whisper; 2023.

Two questions connect Cognitive Science and Artificial Intelligence:
1. Can we encode cognitive theories into computational systems?
2. Can we analyze AI agents using the tools of Cognitive Science?

Linguistic concept of "acceptability" – would someone actually say/write this sentence? (Tested with GPT-2 and BERT.)
A. That is the narrative we have been sold.
B. This is the week you have been dying.

How does ChatGPT's cognition compare to human cognition?
Comparing ChatGPT's cognition with human cognition (results from 2022 vs. 2024 models):
1. False-belief task (with a no-belief control task in 2024)
2. Centration
3. Model-based reasoning
4. Wason selection task
5. Alief
6. Similarity to neural responses vs. performance on a Computer Vision benchmark (Linsley et al., 2023)

Summary
1. There are multiple definitions of what counts as "AI."
2. Multiple approaches to AI have been attempted, with rule-based symbolic systems now largely replaced by connectionist systems trained on large datasets.
3. Cognitive Science can give us tools for evaluating or improving the human-ness of AI systems.

AI and Society

Bias: "Garbage in, garbage out" (GIGO)
1. Racist chatbots
2. Racist YouTube recommendations
3. Racist prison sentences

Bias: Vicious feedback loops
1. Identify an area for extra policing
2. More criminal activity found in that area
3. Identify it as in need of still more policing
4. Etc.

Misalignment: goal satisfaction. A parable:

"But, say one day we create a super intelligence and we ask it to make as many paper clips as possible. Maybe we built it to run our paper-clip factory. If we were to think through what it would actually mean to configure the universe in a way that maximizes the number of paper clips that exist, you realize that such an AI would have incentives, instrumental reasons, to harm humans. Maybe it would want to get rid of humans, so we don't switch it off, because then there would be fewer paper clips. Human bodies consist of a lot of atoms and they can be used to build more paper clips. If you plug into a super-intelligent machine with almost any goal you can imagine, most would be inconsistent with the survival and flourishing of the human civilization." – Nick Bostrom

Social harms: ways that AI can become dangerous
1. Become super intelligent and outsmart us
2. Used for bad purposes
3. Put in charge of important tasks and makes mistakes
4. Employers hire fewer employees
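The four-step vicious feedback loop in the Bias slides above can be sketched as a toy simulation. This is purely illustrative: the area names, crime rates, and patrol counts are invented, and real predictive-policing systems are far more complex. The point is that even when two areas have identical true crime rates, a small initial imbalance in recorded crime can be amplified indefinitely, because records are only generated where patrols are sent.

```python
# Toy simulation of the vicious feedback loop described above (all numbers
# are made up for illustration). Two areas have IDENTICAL true crime rates,
# but area 0 starts with slightly more *recorded* crime.
import random

random.seed(0)

TRUE_CRIME_RATE = [0.10, 0.10]  # identical underlying rates in both areas
recorded = [5, 4]               # historical records: area 0 slightly ahead
patrols = [0, 0]                # cumulative patrol days per area

for day in range(200):
    # Step 1: identify the area with the most recorded crime for extra policing.
    target = 0 if recorded[0] >= recorded[1] else 1
    patrols[target] += 1
    # Step 2: more patrols mean more chances to observe (and record) crime
    # in the patrolled area -- the unpatrolled area generates no records.
    for _ in range(3):  # extra observation opportunities while on patrol
        if random.random() < TRUE_CRIME_RATE[target]:
            recorded[target] += 1
    # Steps 3-4: the loop repeats, reinforcing the initial imbalance.

print("patrol days per area:   ", patrols)
print("recorded crime per area:", recorded)
```

Because only the patrolled area can accumulate new records, area 0 receives every patrol day while area 1's count never changes, even though both areas are equally "criminal" by construction: the data reflect where we looked, not what is there.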
