Lecture 4: A Brief History of AI (COM1005 2023-24)

Summary

This document provides a lecture outline and full transcript for a lecture on the history of artificial intelligence (AI), covering the Golden Early Years (1956-1969) and the First "AI Winter" (1966-73). Topics include reasoning as search, machine learning, natural language processing, microworlds, robotics and planning, and the causes of the first AI winter.

Full Transcript


Lecture 4: A Brief History of AI: Golden Early Years and the "First AI Winter"
Rob Gaizauskas, COM1005 2023-24

Lecture Outline
• Historical Overview
– Precursors (… – 1943)
– Gestation and Birth (1943 – 1956)
– Golden Early Years (1956 – 1969)
– The First "AI Winter" (1966 – 73)
– Rise of Knowledge-based and Expert Systems (1969 – 1989)
– New Paradigms: Connectionism; Intelligent Agents; Embodied AI (1986 – present)
– Scientific Method, Big Data and Deep Learning (1987 – present)
• Today: Golden Early Years (1956-1969) and the First "AI Winter" (1966-73)
• Reading (* = mandatory)
– *Russell and Norvig (2021), Chapter 1 "Introduction"
– *Wikipedia: History of Artificial Intelligence: https://en.wikipedia.org/wiki/History_of_artificial_intelligence

Golden Early Years (1956-1969)
• A period of enthusiastic exploration and discovery of new possibilities for computers
• Great optimism:
– 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."
– 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."
– 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
– 1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."
– Watch: http://watson.latech.edu/book/intelligence/videos/airesearch.mp4 (from CBS "Thinking Machines" (1961); full show at https://youtu.be/cvOTKFXpvKA)
• Lots of funding:
– The US Advanced Research Projects Agency (ARPA) poured money into labs at MIT, Stanford and CMU

Golden Early Years (1956-1969): Main Strands
• Reasoning as Search: General Problem Solver; resolution theorem proving
• Machine Learning: Samuel's checkers program
• NLP: STUDENT; SHRDLU; ELIZA; PARRY
• Microworlds
• Robotics/Planning: Shakey

Reasoning as Search: General Problem Solver (GPS)
• Created by Simon and Newell in 1959
• Intended to be a universal problem solver: in principle it could solve any formalized symbolic problem
– Applied to, e.g., theorem proving, geometry problems and chess
• Meant to reason in a human-like way
– The user defines objects and the operations that can be performed on them
• Uses means-ends analysis to control search (sketched below):
– given a current state and a goal state, it attempts to reduce the difference between the two
– it identifies the operations available from the current state and their outputs, then creates sub-goals that reduce the distance to the overall goal as much as possible
• The first program to separate knowledge of problems (rules presented as inputs) from the strategy for solving them (a generic problem-solving engine)
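GPS itself was written in the IPL language and was considerably more elaborate, but the core loop of means-ends analysis can be sketched in a few lines of Python. Everything below (the operator names and the airport toy problem) is illustrative, not from the original program:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconds: frozenset                   # facts that must hold to apply it
    adds: frozenset                       # facts it makes true
    deletes: frozenset = frozenset()      # facts it makes false

def apply_op(op, state):
    return (state - op.deletes) | op.adds

def means_ends(state, goal, operators, depth=10):
    """Achieve every goal fact, sub-goaling on operator preconditions.
    Returns (plan, resulting_state) or None; the depth bound keeps this
    sketch from looping on unsolvable problems."""
    if goal <= state:
        return [], state
    if depth == 0:
        return None
    fact = next(iter(goal - state))       # one outstanding "difference"
    for op in operators:
        if fact not in op.adds:
            continue                      # op does not reduce the difference
        sub = means_ends(state, op.preconds, operators, depth - 1)
        if sub is None:
            continue                      # cannot establish preconditions
        pre_plan, s = sub
        s = apply_op(op, s)
        rest = means_ends(s, goal, operators, depth - 1)
        if rest is None:
            continue
        rest_plan, s = rest
        return pre_plan + [op.name] + rest_plan, s
    return None

# Toy problem in the spirit of the airport example mentioned below.
ops = [
    Operator("fetch-keys", frozenset({"at-home"}), frozenset({"have-keys"})),
    Operator("drive-to-airport", frozenset({"at-home", "have-keys"}),
             frozenset({"at-airport"}), frozenset({"at-home"})),
]
plan, _ = means_ends(frozenset({"at-home"}), frozenset({"at-airport"}), ops)
print(plan)   # ['fetch-keys', 'drive-to-airport']
```

Note how the solver never enumerates the whole state space: it only pursues operators that reduce the current goal-state difference, which is the distinctive idea of means-ends analysis.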
Reasoning as Search: General Problem Solver (cont.)
• The success of GPS and related programs as models of cognition led Newell and Simon (1976) to formulate the physical symbol system hypothesis: "a physical symbol system has the necessary and sufficient means for general intelligent action", where "a physical symbol system takes physical patterns (symbols), combining them into structures (expressions) and manipulating them (using processes) to produce new expressions" (https://en.wikipedia.org/wiki/Physical_symbol_system)
• The implication is that any intelligent system (human or machine) must operate by manipulating symbols
• Hotly disputed (see later): much hangs on what is meant by "symbol"

Reasoning as Search (cont.)
• Geometry Theorem Prover
– Gelernter (1959)
– Proved challenging theorems in geometry
• Advice Taker
– McCarthy (1958), "Programs with Common Sense"
– Like GPS, it used knowledge to search for solutions, but the knowledge it used was general knowledge of the world
• e.g. axioms could be used to derive a plan to drive to the airport
– By separating the explicit representation of world knowledge from the deductive reasoning engine, the system could be adapted to a new domain without reprogramming: simply supply axioms for the new domain as inputs to the reasoner
– McCarthy also created the LISP programming language in this period (1958)
• Resolution theorem proving
– Discovered by J. A. Robinson in 1965: a complete theorem-proving algorithm for first-order logic
– Underlies the Prolog logic programming language (first version 1972)
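Robinson's algorithm works over first-order clauses using unification; as a flavour of the core inference rule only, here is a minimal propositional resolution refutation in Python (the clause encoding and function names are my own illustration, not from Robinson's paper):

```python
# Clauses are frozensets of literals; "~" marks negation. To prove a goal,
# add its negation to the knowledge base and search for the empty clause.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses: cancel a complementary literal pair."""
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def refute(clauses):
    """Saturate under resolution; True iff the empty clause is derivable."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # nothing new: not refutable
            return False
        clauses |= new

# Prove q from p and (p -> q), i.e. refute {p, ~p or q, ~q}:
kb = [frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"})]
print(refute(kb))   # True: q follows
```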
Machine Learning: Samuel's Checkers Programs
• Arthur Samuel developed a series of checkers (draughts) programs for IBM's first commercial scientific computer, the 701
• Demonstrated on television in 1956, showing the potential of computers for non-numerical tasks, and of AI
• The search space is too big to search exhaustively
– Introduced core ideas of the minimax search strategy and of alpha-beta pruning (heuristically evaluating board positions and exploring only the best); see the sketch below
• Later versions of the program (1959) learned, e.g. by:
– re-evaluating the reward function based on professional games input to the system
– playing against itself
• The first machine learning program
• Reached the level of a respectable amateur: the first game-playing computer to do so
• In 1994 the University of Alberta's Chinook checkers program became World Checkers Champion (http://webdocs.cs.ualberta.ca/~chinook/)
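Samuel's programs are usually credited with early forms of these ideas; the standard textbook formulation looks like the sketch below, here run on a hand-written two-ply game tree rather than a real checkers position (his actual programs added learned board evaluation on top):

```python
# Minimax with alpha-beta pruning. A game tree is given directly as nested
# lists whose leaves are heuristic scores for the maximizing player.

def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):       # leaf: heuristic board evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent already has a better line,
                break                    # so prune the remaining children
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Two plies: max chooses a branch, min replies. Best play yields 6, and the
# leaf 2 in the last branch is pruned without ever being examined.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree))   # 6
```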
Natural Language Processing
• Unconstrained natural language was too difficult, so many programs were limited to working in restricted domains
• STUDENT
– D. Bobrow (1967)
– Designed to read and solve word problems such as those found in high-school algebra books, e.g. "If the number of customers Tom gets is twice the square of 20% of the number of advertisements he runs, and the number of advertisements is 45, then what is the number of customers Tom gets?"
• SHRDLU
– T. Winograd (1968-72)
– The user converses with the computer about a simple world consisting of blocks of various shapes and colours: the user can move objects, name collections and query the state of the "world"
– A sample dialogue (U = user; S = SHRDLU):
U: Pick up a big red block.
S: OK.
U: Grasp the pyramid.
S: I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN.
U: Find a block which is taller than the one you are holding and put it in the box.
S: BY "IT", I ASSUME YOU MEAN "THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING".
U: What does the box contain?
S: THE BLUE PYRAMID AND THE BLUE BLOCK.
U: What is the blue pyramid supported by?
S: THE BOX.
– Hear Winograd talking about SHRDLU: http://watson.latech.edu/book/intelligence/videos/terryshrdlu.mp4; more at http://www.youtube.com/watch?v=bo4RvYJYOzI
• SHRDLU gave the illusion of language understanding because:
– Its whole world was very simple: it could be described by ~50 words, nouns like "block" and "cone", verbs like "place on" and "move to", and adjectives like "big" and "blue"
– It had a basic memory to supply context: one could ask SHRDLU to "put the green cone on the red block" and then to "take the cone off", and "the cone" would be taken to mean the green cone one had just talked about; one could also ask about history, e.g. "did you pick up anything before the cone?"
– The "world" (independently of the language-understanding component) contained basic physics that would permit, e.g., blocks but not pyramids to be stacked; SHRDLU could answer questions about what was possible and what was not, deducing that blocks could be stacked by looking for examples, but realizing that pyramids couldn't be stacked after having tried it
– SHRDLU allowed a user to introduce new names for objects or arrangements of them and would remember these names: one could say "a steeple is a small triangle on top of a tall rectangle", and SHRDLU could then answer questions about steeples in the blocks world and build new ones
• Why the name? See http://hci.stanford.edu/winograd/shrdlu/name.html and, for trivia fans, http://vimeo.com/43461598

Natural Language Processing: ELIZA
• Written by Joseph Weizenbaum at MIT, 1964-66
• Simulates a non-directive therapist encouraging the user to talk, which avoids having any model of the world
• Uses simple pattern-matching techniques (sketched below) to:
– match the user's utterance against a database of simple responses
– replace words like "my" with "your" in responses: e.g. when the user says "my mother hates me" it will respond with "who else in your family hates you?"
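Weizenbaum's program worked from a "script" of ranked keywords with decomposition and reassembly rules; the following Python fragment is a much-reduced sketch of the same match-and-reflect idea (the specific rules shown are invented for illustration, not taken from the original DOCTOR script):

```python
import re

# First-person words are "reflected" into second person before being
# echoed back, which is what makes the responses feel attentive.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are"}

RULES = [  # tried in order; the first matching pattern wins
    (r"my (.+) hates me", "WHO ELSE IN YOUR FAMILY HATES YOU?"),
    (r"i am (.+)",        "I'M SORRY TO HEAR YOU ARE {0}."),
    (r"my (.+)",          "TELL ME MORE ABOUT YOUR {0}."),
    (r".*",               "PLEASE GO ON."),       # catch-all: always matches
]

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g).upper() for g in m.groups()))

print(respond("My mother hates me"))   # WHO ELSE IN YOUR FAMILY HATES YOU?
print(respond("I am unhappy"))         # I'M SORRY TO HEAR YOU ARE UNHAPPY.
```

Note that nothing here models meaning at all: the apparent understanding is produced entirely by pattern order and pronoun reflection, which is exactly why Weizenbaum was disturbed when users were taken in by it.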
Natural Language Processing: ELIZA (cont.)
• The first chatterbot/chatbot: "the first time a programmer had attempted such a human-machine interaction with the goal of creating the illusion … of human-human interaction" (http://en.wikipedia.org/wiki/ELIZA)
• Surprisingly, many users were duped by the illusion
• This prompted Weizenbaum to become a leading critic of AI
– He wrote "Computer Power and Human Reason: From Judgment to Calculation"
– He argues we should never allow computers to make important decisions, because they will always lack human qualities such as compassion and wisdom (http://en.wikipedia.org/wiki/Joseph_Weizenbaum)
• A sample dialogue (U = user; E = Eliza):
U: Men are all alike.
E: IN WHAT WAY?
U: They're always bugging us about something.
E: CAN YOU THINK OF A SPECIFIC EXAMPLE?
U: Well my boyfriend made me come here.
E: YOUR BOYFRIEND MADE YOU COME HERE?
U: He says I'm depressed much of the time.
E: I'M SORRY TO HEAR YOU ARE DEPRESSED.
U: It's true. I'm unhappy.
E: DO YOU THINK THAT COMING HERE WILL HELP YOU NOT TO BE UNHAPPY?
U: I need some help, that much seems certain.
E: WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP?
U: Perhaps I could learn to get along with my mother.
E: TELL ME MORE ABOUT YOUR FAMILY.
U: My mother takes care of me.
E: WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU?
U: My father.
E: YOUR FATHER?
U: You are like my father in some ways.
E: WHAT RESEMBLANCE DO YOU SEE?
U: You are not very aggressive but I think you don't want me to notice that.
E: WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE?
…
(Eliza On Line)

Aside: PARRY vs Eliza
• Because of the surprising success of Eliza, various chatbots followed during this period
• One of the most influential was PARRY, implemented by the psychiatrist Kenneth Colby in 1972:
– It attempts to simulate a person with paranoid schizophrenia, using a model of concepts, conceptualizations, and beliefs/judgements about these conceptualizations
– It also implements a conversational strategy, making it considerably more sophisticated than Eliza
– See https://en.wikipedia.org/wiki/PARRY
• In tests, psychiatrists were unable to distinguish PARRY from real paranoid schizophrenics at better than chance level
• PARRY and Eliza "met" and had several famous chats; in this excerpt PARRY's turns are marked *, and Eliza plays the DOCTOR:
E: Good Evening. Tell me your problems. Please terminate input with a period or a question mark.
P: *People get on my nerves sometimes.
E: I am not sure I understand you fully.
P: *You should pay more attention.
E: Suppose you should pay more attention.
P: *You're entitled to your own opinion.
E: What makes you think I am entitled to my own opinion?
P: *It bothers me just to be around people in general.
E: Please go on.
P: *I went to the track at Bay Meadows a while back.
E: What does that suggest to you?
P: *I went to the races.
E: What else would you like to discuss?
P: *Horseracing interests me.
E: Why do you say that just now?
P: *There's a lot of crooked gambling going on there.
E: I am not sure I understand you fully.
(from "PARRY Encounters the DOCTOR", https://tools.ietf.org/html/rfc439)

Microworlds
• STUDENT and SHRDLU were instances in a programme of research directed by Minsky at MIT which he called microworlds: artificially simple situations
– He observed that in successful sciences like physics, basic principles are often best understood using simplified models such as frictionless planes or perfectly rigid bodies
• Aside from NLP, other examples included work in computer vision and robotics (also in blocks worlds)
• Later, "blocks world" became a derogatory term and a hallmark of the limitations of early AI

Robotics/Planning: Shakey
• Developed at the Stanford Research Institute (SRI), 1966-72
• The first general-purpose mobile robot able to reason about its own actions: it could analyze a command and break it down into basic chunks by itself
• The project combined research in robotics, computer vision and natural language processing; it was the first project that melded logical reasoning and physical action
• Results of the project include:
– the A* search algorithm (efficient search for route finding and graph traversal)
– the Hough transform (finds simple shapes in images: lines, circles, ellipses)
– the visibility graph method (used in robot motion planning)
• Used STRIPS (Stanford Research Institute Problem Solver) to plan
– The basis for most planners used today
– Given an initial state, a goal state, and actions (with associated pre- and post-conditions), it develops a plan to move from the initial state to the goal state by achieving the appropriate subgoals (see the sketch below)
• Shakey Video
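STRIPS itself planned by working backwards over a stack of goals; the Python sketch below keeps only the STRIPS action representation (preconditions plus add and delete lists) and finds plans by brute-force breadth-first search forwards, which is enough to show the idea. The two blocks-world-style actions are hypothetical:

```python
from collections import deque

# Each action: (preconditions, add list, delete list), all sets of facts.
ACTIONS = {
    "pickup-A":  (frozenset({"on-table-A", "hand-empty"}),
                  frozenset({"holding-A"}),
                  frozenset({"on-table-A", "hand-empty"})),
    "stack-A-B": (frozenset({"holding-A", "clear-B"}),
                  frozenset({"on-A-B", "hand-empty"}),
                  frozenset({"holding-A", "clear-B"})),
}

def plan(initial, goal):
    """Breadth-first search over states reachable via applicable actions."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts achieved
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                   # preconditions satisfied
                nxt = (state - delete) | add   # apply post-conditions
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # goal unreachable

print(plan({"on-table-A", "hand-empty", "clear-B"}, {"on-A-B"}))
# ['pickup-A', 'stack-A-B']
```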
Golden Early Years (1956-1969): Summary
• Lots of disconnected, excited exploration in a new field
– Typical approach: build a prototype to explore a problem in a limited domain
– Hard to compare results, even within the same sub-area
– Lots of hype and fantastic projections
– Failure to relate work to existing bodies of relevant knowledge in other areas, and to embrace the scientific method
• Focus on "weak", general methods
– e.g. GPS, first-order theorem proving
– Growing awareness of the inescapable problem of combinatorial explosion/exponential complexity of these methods
• Striking successes were achieved only within "microworlds"
– Could methods be scaled up beyond these?

The First "AI Winter" (1966-73)
• By the end of the 1960s it was apparent that the exaggerated claims of AI researchers would not be fulfilled
• AI went through a period of serious difficulty
• The issues were of several sorts:
– problems within the AI research agenda
– withdrawal of funding
– attacks from outside AI

Problems within the AI Research Agenda: Computing Power Too Limited
• Both processing speed and memory were too restrictive for AI processing of real tasks
• e.g. to match the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10^9 operations/second (1000 MIPS)
• Today's computer vision applications require 10^4 – 10^6 MIPS
• In 1976 the world's fastest supercomputer (the Cray-1) was capable of ~80-130 MIPS: at least an order of magnitude too slow

Problems within the AI Research Agenda: Intractability/Combinatorial Explosion
• A better theoretical understanding of algorithmic complexity revealed the classes of problems for which solutions take too long to compute to be practically useful
– i.e. those with exponential time complexity
• Many of the approaches taken in early AI were precisely of this type (e.g. search for proofs, plans, etc.)
– They worked in microworlds, where there were few objects/actions
– They did not, and could not, scale up (i.e. it was not just a question of faster/bigger hardware)
• The same problem arose in machine evolution/genetic algorithms
– Early belief that by randomly mutating and selecting programs one could derive useful/improved programs
– No progress in early work, despite thousands of hours of CPU time
– (There has been more success with more modern algorithms)

Problems within the AI Research Agenda: What is the Combinatorial Explosion?
• Suppose you start a search in the initial state
• At each step you have k potential next moves
• Then, to look n steps ahead you need to examine 1 + k + k^2 + … + k^n states: a number exponential in the number of steps
• e.g. in chess, with an average of 30 possible moves per board position and looking an average game ahead (40 moves by each player, i.e. 80 plies), this sum is 1 + 30 + 30^2 + … + 30^80 ≈ 10^118, of the order of the "Shannon number" of ~10^120 (see http://en.wikipedia.org/wiki/Shannon_number)
• Examining 1000 board positions per second, computing the first move this way would take ~10^108 years
• By comparison, the number of atoms in the universe is ≈ 10^80
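The arithmetic can be checked directly in a few lines of Python, using the slide's assumptions (branching factor 30, a 40-move game for each player, 1000 positions examined per second):

```python
import math

k, plies = 30, 80                      # branching factor; 40 moves each = 80 plies
states = sum(k**i for i in range(plies + 1))
print(f"states ≈ 10^{math.log10(states):.0f}")   # states ≈ 10^118

years = states / 1000 / (3600 * 24 * 365)        # at 1000 positions/second
print(f"years ≈ 10^{math.log10(years):.0f}")     # years ≈ 10^108
```

The geometric sum is dominated by its last term, so the lookahead cost is essentially k^n: this is why no amount of constant-factor hardware speedup rescues exhaustive search.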
Problems within the AI Research Agenda: Lack of Commonsense Knowledge
• Early AI researchers failed to realize the extent to which commonsense/everyday knowledge is necessary in many tasks
• e.g. in order to constrain the vast space of possible interpretations of visual or linguistic input, programs for visual object recognition or natural language understanding need to have some idea of what they might be looking at or talking about
– i.e. they need to know the sorts of things about the world that a child does
– It was soon discovered that this was a truly vast amount of information
– In 1970 it was not known how to build a database so large, or how a program might learn so much information
• A key problem for visual or linguistic interpretation is ambiguity: most scene elements, speech elements and words have multiple interpretations, e.g.
– visual ambiguity [figure omitted]
– phonological ambiguity: "The annoying salesman tried the doctor's patience/patients"; "I ate (a nice peach)/(an ice beach)"
– word sense ambiguity: "The crane lifted the pallet of bricks"
• Context/world knowledge lets us resolve these effortlessly, but how can this knowledge be represented, acquired and used in a computer?

Problems within the AI Research Agenda: Moravec's Paradox
• (http://en.wikipedia.org/wiki/Moravec%27s_paradox)
• Contrary to the assumptions of early AI researchers, simulating high-level reasoning proved relatively easy while simulating low-level sensorimotor skills proved extremely hard: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
• Steven Pinker in "The Language Instinct" (1994): "The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come."

Problems within the AI Research Agenda: Crisis of Connectionism
• McCulloch and Pitts' early work on artificial neurons, and Hebb's work on learning for artificial neurons, was extended by Frank Rosenblatt's introduction of the perceptron in 1957
– Rosenblatt predicted the "perceptron may eventually be able to learn, make decisions, and translate languages."
• Considerable work was done on artificial neural networks consisting of multilayer perceptrons (input, output and at least one hidden layer) during the 1960s
• But in 1969 Minsky and Papert published the book "Perceptrons"
– It suggested there were severe limitations to what perceptrons could do and that Rosenblatt's predictions had been grossly exaggerated
– Specifically, a single-layer perceptron network cannot compute the XOR (exclusive OR) function
– More generally, single-layer perceptrons can only solve linearly separable classification problems (demonstrated below)
– [Figures omitted; image credits: jarvmiller.github.io/2017/10/14/neural-nets-pt1/ and vitalflux.com/how-know-data-linear-non-linear/]
– The book effectively killed research in connectionism for 10 years
– The field eventually revived (mid-80s), then stalled again (90s); now, with "deep learning", it has become a vital and central part of AI
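The limitation is easy to demonstrate. Below, a minimal single-layer perceptron is trained with the classic perceptron learning rule: on AND (linearly separable) it converges; on XOR no separating line exists, so the rule cycles forever and we stop after a fixed number of epochs. This illustrates the Minsky-Papert point rather than reproducing their proof:

```python
# A single-layer perceptron computes: output = 1 if w.x + b > 0 else 0,
# i.e. a linear decision boundary in the input plane.

def train_perceptron(samples, epochs=100, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            if out != target:
                errors += 1
                w[0] += lr * (target - out) * x1    # perceptron update rule
                w[1] += lr * (target - out) * x2
                b    += lr * (target - out)
        if errors == 0:
            return w, b     # converged: the data is linearly separable
    return None             # never converged within the epoch budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(train_perceptron(AND))   # a separating line, e.g. ([2.0, 1.0], -2.0)
print(train_perceptron(XOR))   # None: no single-layer solution exists
```

Adding a hidden layer removes the limitation, but practical training methods for multilayer networks (backpropagation) were not widely adopted until the mid-1980s, which is why the book had the chilling effect described above.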
Problems within the AI Research Agenda: The Frame Problem
• AI researchers who used logic for representing knowledge and reasoning about it, e.g. in robot planning, discovered that they needed axioms stating that things that had not been changed by an action remained the same
– i.e. actions change specific things; everything else (the "frame") remains the same
• But then everything not affected had to be explicitly stated for each action
– Infeasible to do this for complex world models and action sets
• Eventually (by the late 1980s) various solutions (requiring new logics) were found
– At the time the problem seemed very significant and possibly insurmountable

Problems within the AI Research Agenda: The Qualification Problem
• A further problem arose when using logic-based reasoning for planning
• When specifying actions which may form part of a plan, one needs to specify the pre-conditions that must be met for the action to be possible
– e.g. to use a boat to row across a river requires the presence of oars
• However, listing all pre-conditions in all conceivable circumstances is impossible
– e.g. oars must be present; rowlocks must be present and unbroken; oars must fit rowlocks; …; meteorites must not intervene; etc.
• Eventually solutions were found using probability theory to summarize exceptions

Withdrawal of Funding: US ALPAC Report (1966)
• ALPAC = Automatic Language Processing Advisory Committee
• Appointed to evaluate progress in computational linguistics, especially machine translation (MT)
• Its very negative assessment led to a complete stop in MT funding
• A famous example illustrating the weaknesses: "the spirit is willing but the flesh is weak", translated English -> Russian -> English, came back as "the vodka is good but the meat is rotten"
• Today MT/computer-assisted translation is a multi-billion-dollar business

Withdrawal of Funding: UK Lighthill Report (1973)
• Criticised the utter failure of AI to achieve its "grandiose objectives" and was "highly critical of basic research in foundational areas such as robotics and language processing"
• Stated that AI researchers had failed to address the issue of combinatorial explosion when solving problems within real-world domains
– Claimed AI techniques might work within the scope of small problem domains, but would not scale up well to solve more realistic problems
• Led to the cessation of funding for AI research in the UK
• In the US, DARPA cancelled the Speech Understanding Research programme at Carnegie Mellon, and the National Research Council cancelled funding …

Attacks from Outside AI: Lucas and Gödel
• We have already seen the critiques of Weizenbaum and Searle
• The philosopher J. R. Lucas (1961) raised what was essentially Turing's anticipated Mathematical Objection:
– Gödel's Incompleteness Theorem (GIT) shows that for any formal axiomatic system F powerful enough to do arithmetic, there will be a sentence of F such that the sentence is true but cannot be proved within F, if F is consistent
– Lucas then argued that since machines are formal systems, there will be sentences which are true which they can never establish, while humans are not subject to such limitations
• Russell and Norvig (2021, pp. 1034-35) reply with three arguments:
1. Gödel's argument applies only to systems powerful enough to do arithmetic, such as Turing machines. But computers only approximate Turing machines, since Turing machines are infinite while computers are not: they are just very large systems in propositional logic, and as such are not subject to GIT
2. There are sentences that are true which a given human cannot consistently assert but other humans can, e.g. "J. R. Lucas cannot consistently assert that this sentence is true". This is not viewed as a fundamental limitation on human thinking
3. Even if computers do have limitations on what they can prove, there is no evidence that humans do not also have these limitations
– It is easy to show rigorously that a formal system cannot do X, and then claim humans can do X without giving evidence
– One cannot prove humans are not subject to GIT: to do so would require formalizing the claimed unformalizable human ability
• Turing's original response was a variant of 3: he pointed out that our superiority would only be with respect to a particular machine; other machines might be cleverer again
Attacks from Outside AI: Dreyfus
• Hubert Dreyfus (1972), "What Computers Can't Do", and later "What Computers Still Can't Do" (1992)
• Persistently advocated what Turing called "The Argument from Informality":
– Human behaviour is too complex to be captured by any set of rules
– Computers can only follow rules
– Therefore, computers cannot generate behaviour as intelligent as humans' (related to the qualification problem)
• One can see Dreyfus's arguments as arguing against a particular sort of AI
– Sometimes called GOFAI ("Good Old-Fashioned AI"): the claim that all intelligent behaviour can be modelled by a system that reasons logically from a set of facts and rules
– He pointed out that such models of intelligence can't deal with the qualification problem, uncertainty, background commonsense knowledge or learning, and lack the knowledge that an embodied agent interacting with the world has
– But modern AI has developed paradigms and methods to address these concerns …

The First "AI Winter" (1966-73): Summary
• The over-hyping and excitement relating to AI in the Golden Early Years was followed by a period of slowdown and reflection
• Causes of the slowdown included:
– Problems within the AI research agenda:
• insufficient computing power
• combinatorial explosion
• the challenge of encoding commonsense knowledge
• the realisation that modelling some abstract reasoning was easy while modelling simple perceptual/motor skills was hard (Moravec's Paradox)
• the crisis of connectionism (the XOR problem)
• the Frame and Qualification problems
– Withdrawal of funding in both the US and the UK
– Criticisms from outside AI: Weizenbaum, Searle, Lucas, Dreyfus
• Are we in a period of comparable over-hyping of AI now?
References

Russell, Stuart and Norvig, Peter (2021). Artificial Intelligence: A Modern Approach (4th ed). Pearson.
Wikipedia: ALPAC. http://en.wikipedia.org/wiki/ALPAC (visited 30/09/22).
Wikipedia: Alvey Programme. http://en.wikipedia.org/wiki/Alvey_Programme (visited 30/09/22).
Wikipedia: Artificial Intelligence. http://en.wikipedia.org/wiki/Artificial_intelligence (visited 30/09/22).
Wikipedia: Artificial Neuron. http://en.wikipedia.org/wiki/Artificial_neuron (visited 30/09/22).
Wikipedia: Dartmouth Workshop. https://en.wikipedia.org/wiki/Dartmouth_workshop (visited 30/09/22).
Wikipedia: Hubert Dreyfus. http://en.wikipedia.org/wiki/Hubert_Dreyfus (visited 30/09/22).
Wikipedia: ELIZA. http://en.wikipedia.org/wiki/ELIZA (visited 30/09/22).
Wikipedia: Frame Problem. http://en.wikipedia.org/wiki/Frame_problem (visited 30/09/22).
Wikipedia: General Problem Solver. http://en.wikipedia.org/wiki/General_Problem_Solver (visited 30/09/22).
Wikipedia: History of Artificial Intelligence. http://en.wikipedia.org/wiki/History_of_artificial_intelligence (visited 30/09/22).
Wikipedia: History of Robots. http://en.wikipedia.org/wiki/History_of_robots (visited 30/09/22).
Wikipedia: Lighthill Report. http://en.wikipedia.org/wiki/Lighthill_report (visited 30/09/22).
Wikipedia: Means-Ends Analysis. http://en.wikipedia.org/wiki/Means-ends_analysis (visited 30/09/22).
Wikipedia: Moravec's Paradox. http://en.wikipedia.org/wiki/Moravec%27s_paradox (visited 30/09/22).
Wikipedia: Perceptron. http://en.wikipedia.org/wiki/Perceptron (visited 30/09/22).
Wikipedia: Physical Symbol System. http://en.wikipedia.org/wiki/Physical_symbol_system (visited 30/09/22).
Wikipedia: Qualification Problem. http://en.wikipedia.org/wiki/Qualification_problem (visited 30/09/22).
Wikipedia: Robots. http://en.wikipedia.org/wiki/Robot (visited 30/09/22).
Wikipedia: Shakey the Robot. http://en.wikipedia.org/wiki/Shakey_the_robot (visited 30/09/22).
Wikipedia: SHRDLU. http://en.wikipedia.org/wiki/SHRDLU (visited 30/09/22).
