Questions and Answers
What was the primary goal of the 1956 Dartmouth workshop, considered the birthplace of AI?
- To secure funding for advanced computer hardware research.
- To explore the possibility of creating machines capable of intelligent behavior. (correct)
- To standardize computer programming languages.
- To develop practical applications of existing computer technology.
Which of the following best describes the initial reaction to the term 'Artificial Intelligence' as coined by John McCarthy?
- It was widely disliked, but McCarthy insisted on using it.
- It was considered acceptable, though not ideal, for lack of a better alternative. (correct)
- It was immediately embraced as the perfect descriptor for the field.
- It sparked a heated debate among researchers, leading to its eventual abandonment.
The Rockefeller Foundation significantly impacted the 1956 Dartmouth workshop by:
- Providing only half the requested funding, creating logistical challenges for the organizers. (correct)
- Refusing to provide any funding due to skepticism about Artificial Intelligence.
- Dictating the workshop's agenda and participant selection.
- Providing more than the requested amount of funding, ensuring the workshop's success.
Which statement accurately reflects the state of AI in the early 1960s following the Dartmouth workshop?
What is suggested by Voltaire's quote, "Define your terms, or we shall never understand one another," in the context of discussing Artificial Intelligence?
Why did Marvin Minsky describe 'intelligence' as a "suitcase word?"
In the context of AI, what is the primary difference between the scientific and practical approaches?
How did the Dartmouth workshop participants' views differ on approaching the development of AI?
What is the key characteristic of 'symbolic AI'?
Which of the following problems was the General Problem Solver (GPS) designed to solve?
How does a symbolic AI program, like the General Problem Solver, approach problem-solving?
What is a key limitation of subsymbolic AI, such as perceptrons, compared to symbolic AI?
What inspired the development of perceptrons?
How does a perceptron make decisions?
What is 'supervised learning' in the context of training a perceptron?
Why was the perceptron considered a subsymbolic approach to AI?
What was the primary contribution of Minsky and Papert's book, Perceptrons?
What is an 'AI winter'?
How does a multilayer neural network differ from a single-layer perceptron?
What is back-propagation?
Flashcards
The Dream of AI
The dream of creating a machine as intelligent as humans, dating back centuries.
Founding of AI
The official founding of the AI field traced back to a 1956 workshop at Dartmouth College, organized by John McCarthy.
Artificial Intelligence
A term coined by John McCarthy to differentiate the field from cybernetics.
AI Conjecture
Intelligence (Ill-Defined)
AI's Scientific Side
AI's Practical Side
Artificial intelligence
Symbolic AI
General Problem Solver (GPS)
Subsymbolic AI
Perceptron
Weight
Supervised Learning
Algorithm
Training Set
Test Set
Multilayer Neural Network
Back-Propagation
AI Winter
Study Notes
- Creating an intelligent machine as smart as or smarter than humans is an age-old dream that became modern science with digital computers.
- The rise of computers was influenced by mathematicians' logic-based attempts to understand human thought as mechanical "symbol manipulation."
- Digital computers manipulate symbols like 0 and 1, drawing parallels between computers and the human brain, suggesting intelligence could be replicated in computer programs.
- John McCarthy organized a Dartmouth College workshop in 1956, which is often seen as the official founding of artificial intelligence.
- In 1955, John McCarthy joined Dartmouth's mathematics faculty; he had previously studied psychology and automata theory and was interested in creating a thinking machine.
- At Princeton, McCarthy met Marvin Minsky, who shared his interest in intelligent computers.
- McCarthy collaborated with Claude Shannon (information theory inventor) and Nathaniel Rochester (electrical engineer).
- McCarthy, Minsky, Shannon, and Rochester organized a 2-month study on artificial intelligence at Dartmouth in the summer of 1956.
- McCarthy invented the term "artificial intelligence" to differentiate the field from cybernetics.
- The Rockefeller Foundation received a funding proposal based on the conjecture that any aspect of learning or intelligence can be machine-simulated.
- The topics in the proposal still set the field's agenda today: natural-language processing, neural networks, machine learning, abstract concepts, reasoning, and creativity.
- Despite 1956 computers being vastly slower than today's smartphones, McCarthy and his colleagues were optimistic about AI's progress.
- Obstacles arose such as the Rockefeller Foundation giving only half the funding requested.
- Participants had trouble agreeing.
- The Dartmouth summer of AI named the field and outlined goals, and the "big four" pioneers (McCarthy, Minsky, Allen Newell, and Herbert Simon) met.
- They left with great optimism for the field.
- McCarthy later founded the Stanford Artificial Intelligence Project with the goal of building an intelligent machine within a decade.
- Herbert Simon predicted that machines would be capable of doing any work a man can do within twenty years.
- Marvin Minsky predicted the problems of creating artificial intelligence would be solved within a generation.
- None of the predictions have materialized yet.
- It is unknown how far away we are from constructing a "fully intelligent machine".
- It is not known if reverse engineering the human brain is required, or if algorithms can produce full intelligence.
Defining Intelligence
- Voltaire's "Define your terms" relates to AI due to intelligence being ill-defined.
- Marvin Minsky's term "suitcase word" describes intelligence and related concepts like thinking, cognition, consciousness, and emotion, that are packed with different meanings.
- IQ measures human intelligence on a single scale.
- Intelligence has different dimensions: emotional, verbal, spatial, logical, artistic, social, etc.
- Intelligence can be binary, continuous, or multidimensional.
- AI mainly focuses on scientific and practical efforts, not theoretical distinctions.
- AI researchers study the mechanisms of "natural" (biological) intelligence and try to embed it in computers.
- AI proponents want to create computer programs exceeding human capabilities, regardless of whether these programs think like humans.
- Many AI people joke that their motivations depend on their funding sources.
- AI is defined as "a branch of computer science that studies the properties of intelligence by synthesizing intelligence."
- The lack of a precise AI definition has helped the field grow and advance quickly.
- AI practitioners and researchers are guided by a general sense of direction and an aim to get on with it.
Methods of AI
- At the 1956 Dartmouth workshop, participants advocated different approaches.
- Some promoted mathematical logic and deductive reasoning.
- Others championed inductive methods that draw statistical inferences from data using probabilities.
- Still others looked to biology and psychology for inspiration in creating brain-like programs.
- These debates persist today.
- Each approach has created its own principles, techniques, conferences, and journals, with little intercommunication.
- Since the 2010s, deep learning (or deep neural networks) has emerged as the leading AI paradigm.
- The term "artificial intelligence" itself has come to mean "deep learning", which is an unfortunate inaccuracy.
- AI includes approaches with the goal of creating machines with intelligence.
- Deep learning is one such approach.
- Deep learning is a method in machine learning, an AI subfield where machines learn from data or experiences.
- It's important to understand the philosophical split between symbolic and subsymbolic AI.
Symbolic AI
- Symbolic AI uses human-understandable words or phrases ("symbols") with rules for processing to perform tasks.
- The General Problem Solver (GPS) was an early AI program that could solve problems like the "Missionaries and Cannibals" puzzle.
- GPS’s creators, Herbert Simon and Allen Newell, recorded students "thinking out loud" while solving logic puzzles.
- GPS instructions were encoded in a symbolic manner.
- For the "Missionaries and Cannibals" puzzle, the initial state and desired state are described using symbols like LEFT-BANK, RIGHT-BANK, MISSIONARIES, CANNIBALS, and BOAT.
- The program has "operators" to transform states and "rules" to encode task constraints.
- The MOVE operator moves missionaries and cannibals and requires checks to ensure constraints like the maximum boat capacity are met.
- The computer doesn't know what the symbols mean.
- The computer's "meaning" of symbols is how they can be combined, related, and operated on.
- Symbolic AI holds that mimicking the brain is unnecessary: general intelligence can be captured entirely by symbol-processing programs.
- These symbol-processing programs would use symbols, their combinations, and rules.
- Symbolic AI dominated the field's early decades and produced expert systems for tasks such as medical diagnosis, legal decision-making, and common-sense reasoning.
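The state-and-operator encoding described above can be sketched in Python. This is a minimal illustration, not GPS itself; the state layout, the boat capacity of 2, and the "never outnumbered" rule are assumptions based on the standard puzzle.

```python
# Sketch of a symbolic encoding of the Missionaries and Cannibals puzzle.
# Symbols follow the notes: LEFT-BANK, RIGHT-BANK, MISSIONARIES, CANNIBALS, BOAT.

BOAT_CAPACITY = 2  # assumption: the boat holds at most two people

def legal(state):
    """Rule: cannibals may never outnumber missionaries on either bank."""
    for bank in ("LEFT-BANK", "RIGHT-BANK"):
        m, c = state[bank]["MISSIONARIES"], state[bank]["CANNIBALS"]
        if 0 < m < c:
            return False
    return True

def move(state, missionaries, cannibals):
    """MOVE operator: ferry people to the other bank; None if a constraint fails."""
    if not 1 <= missionaries + cannibals <= BOAT_CAPACITY:
        return None  # the boat must carry at least one and at most two people
    src = state["BOAT"]
    dst = "RIGHT-BANK" if src == "LEFT-BANK" else "LEFT-BANK"
    if state[src]["MISSIONARIES"] < missionaries or state[src]["CANNIBALS"] < cannibals:
        return None  # not enough people on the boat's bank
    new = {
        src: {"MISSIONARIES": state[src]["MISSIONARIES"] - missionaries,
              "CANNIBALS": state[src]["CANNIBALS"] - cannibals},
        dst: {"MISSIONARIES": state[dst]["MISSIONARIES"] + missionaries,
              "CANNIBALS": state[dst]["CANNIBALS"] + cannibals},
        "BOAT": dst,
    }
    return new if legal(new) else None

start = {"LEFT-BANK": {"MISSIONARIES": 3, "CANNIBALS": 3},
         "RIGHT-BANK": {"MISSIONARIES": 0, "CANNIBALS": 0},
         "BOAT": "LEFT-BANK"}
```

A GPS-style program would search over the legal successor states that `move` produces until it reaches the desired state.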
Subsymbolic AI
- Subsymbolic AI was inspired by neuroscience to capture unconscious thought processes.
- Subsymbolic AI can recognize faces.
- Subsymbolic programs are equations (hard-to-interpret operations on numbers) that learn how to perform a task.
- The perceptron, invented by psychologist Frank Rosenblatt in the late 1950s, was a brain-inspired AI program.
- Rosenblatt's perceptrons were inspired by the way neurons process information.
- A neuron receives electrical or chemical input from other neurons, sums the inputs, and fires if the sum reaches a threshold.
- Neurons have different connection strengths (synapses); stronger connections weigh more in the input sum.
- Adjusting synapse strength drives learning in the brain.
- A perceptron is a computer program that simulates neuron information processing with numerical inputs and one output.
- Analogous to a neuron, the perceptron adds up its inputs and outputs either 1 (fires) or 0 (doesn't fire), depending on if the sum exceeds its threshold.
- A numerical weight is assigned to each of a perceptron's inputs; each input is multiplied by its weight, and the results are added to the sum.
- A perceptron's threshold value is set by the programmer.
- A perceptron makes a yes-or-no decision based on whether the weighted sum meets a threshold.
- Rosenblatt proposed networks of perceptrons could perform visual tasks such as recognizing faces and objects.
- A perceptron can be designed to recognize handwritten digits by turning an image into numerical inputs and determining weights/thresholds for the correct output for each digit.
- In this, each pixel in the handwritten number becomes an input for the perceptron.
- Assigned pixel intensity from 0 to 1 serves as the numerical input.
- Unlike the General Problem Solver, a perceptron has no explicit rules for its task.
- Instead, its knowledge is encoded in its weights and threshold value.
- Rosenblatt showed a perceptron could perform perceptual tasks like recognizing digits with the correct weight and threshold values.
- A perceptron must learn these values on its own.
- Rosenblatt proposed learning correct values through conditioning, similar to behavioral psychology.
- The perceptron should be trained through positive and negative reinforcement, an approach known as supervised learning.
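The weighted-sum decision rule described above can be sketched in a few lines of Python; the inputs, weights, and threshold below are arbitrary illustrative values.

```python
def perceptron(inputs, weights, threshold):
    """Return 1 ("fires") if the weighted sum of inputs meets the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example with two inputs and hand-picked weights:
# weighted sum = 1.0*0.6 + 0.5*0.4 = 0.8, which meets the threshold 0.7.
decision = perceptron([1.0, 0.5], weights=[0.6, 0.4], threshold=0.7)
```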
Supervised Learning
- During training, the system produces output, and then it receives a "supervision signal" describing how much its output varied from the correct one.
- The system can thus adjust its weights and threshold.
- Supervised learning requires both positive and negative examples, each labeled by a human.
- The training set trains the system, and performance is evaluated using the test set.
- The most important term in computer science is algorithm: a "recipe" of steps a computer takes to solve a problem.
- Frank Rosenblatt's primary contribution to AI was the perceptron-learning algorithm to train the weights and thresholds.
- The algorithm initially sets random values between -1 and 1 to the weights and threshold.
- The training process compares the perceptron's output with the correct category label.
- If the perceptron is correct, nothing is changed; if it errs, the weights and threshold are adjusted to bring the perceptron's output closer to the correct answer.
- When an error occurs, the weights on higher-intensity pixels are adjusted the most.
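The training procedure just described can be sketched as follows, assuming the standard perceptron update rule (each weight nudged in proportion to its input whenever the output is wrong); the logical-AND task and the settings are illustrative, not from the source.

```python
import random

def train_perceptron(examples, epochs=20, lr=0.1, seed=0):
    """Perceptron-learning sketch. `examples` is a list of (inputs, label)
    pairs with labels 0 or 1. Weights and threshold start at random values
    between -1 and 1, as described in the notes."""
    rng = random.Random(seed)
    n = len(examples[0][0])
    weights = [rng.uniform(-1, 1) for _ in range(n)]
    threshold = rng.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, label in examples:
            output = 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0
            error = label - output  # 0 when correct: nothing is changed
            if error:
                # Higher-intensity inputs receive the larger adjustment.
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                threshold -= lr * error
    return weights, threshold

# Learn logical AND, a simple linearly separable task.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, t = train_perceptron(data)
```

Repeating the update over the whole training set a few times is enough for the weights and threshold to settle on values that classify every example correctly.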
Limitations
- The whole process is repeated for the next training example, modifying the weights and threshold a little bit each time the perceptron makes an error.
- Learning happens gradually.
- Ideally, the system settles on a set of weights and a threshold that yield correct answers on all the training examples.
- Performance is evaluated on the test examples.
- It is possible to extend the perceptron to ten outputs (one for each digit).
- Networks of perceptrons could learn to do simple perceptual tasks, according to Rosenblatt.
- The New York Times reported wildly optimistic predictions from a press conference Rosenblatt held in July 1958.
- Unlike the General Problem Solver's, a perceptron's "knowledge" is a set of numbers that is hard to reverse engineer and is not symbolic.
- It is not easy to translate these numbers into rules understandable for humans.
- Our neural firings can be considered subsymbolic.
- Subsymbolic advocates believe that language-like symbols, and the rules for processing them, cannot be programmed in directly.
- Instead, intelligent symbol processing must emerge from brain-like, neural architectures.
- After the 1956 Dartmouth meeting, the symbolic camp had influence.
- Minsky believed Rosenblatt's brain-inspired approach to AI was a dead end.
- Minsky and Papert mathematically proved that perceptrons are limited.
- A perceptron augmented with a "layer" of simulated neurons has broader capabilities; such a system is called a multilayer neural network.
- There was, however, no general algorithm, analogous to the perceptron-learning algorithm, for learning the weights and thresholds of multilayer networks.
- Frank Rosenblatt had recognized the difficulty of training multilayer perceptrons.
- Minsky and Papert's negative speculations were part of the reason that funding for neural network research dried up in the late 1960s.
- There was a lack of government funding and so research on perceptrons and other subsymbolic AI methods largely halted.
AI Winter
- By the mid-1970s, the more general AI breakthroughs that had been promised had not materialized.
- Two influential reports assessed progress and prospects for AI research negatively.
- One report acknowledged that "programs written to perform in highly specialised problem domains" showed promise.
- Another concluded that the results to date were "wholly discouraging about general-purpose programs seeking to mimic the problem-solving aspects of human [brain] activity over a rather wide field."
- There was a sharp decrease in government funding for AI research.
- The Department of Defense drastically cut funding for basic AI research in the United States.
- It was an early example of AI having bubbles and crashes.
- In phase 1, optimism spreads through the research community.
- Results are promised, and often hyped in the news media.
- Funding pours in from government funders and venture capitalists.
- In phase 2, the promised breakthroughs fail to occur.
- Funding dries up.
- Start-up companies fold, and AI research slows.
- An "AI spring" of optimism is followed by overpromising and media hype, which in turn is followed by an "AI winter."
- The field had garnered such a bad image that some advised against including "artificial intelligence" on job applications.
Current Challenges
- AI was harder than people thought.
- "Easy things are hard."
- Computers that can converse with us in natural language have proved hard to develop.
- Such abilities have turned out to be more difficult for AI to achieve than diagnosing complex diseases or beating human champions at chess and Go.
- What is easy and obvious to humans is very difficult to replicate/create with AI.
- AI has helped elucidate how complex and subtle our own minds are.
Neural Networks
- Multilayer neural networks have turned out to form the foundation of much of modern artificial intelligence.
- A network is a set of elements that are connected to one another in various ways.
- Neural networks use simulated neurons akin to the perceptrons.
- Figures show a simple multilayer neural network designed to recognize handwritten digits.
- The network has two columns (layers) of perceptron-like simulated neurons (circles).
- The simulated neurons are referred to as "units."
- Counting the inputs, the network has three layers: an input layer, a hidden layer with 3 units, and a 10-unit output layer (one unit per digit).
- Large gray arrows signify that each input has a weighted connection to each hidden unit, and each hidden unit has a weighted connection to each output unit.
- The mysterious-sounding term hidden unit simply means a non-output unit.
- The network shown has hidden and output layers.
- Multilayer networks can have multiple layers of hidden units; such networks are called deep networks.
- The "depth" of a network is simply its number of hidden layers.
- Each unit multiplies each of its inputs by the weight on that input's connection and then sums the results.
- Each unit uses its sum to compute a number between 0 and 1 (unit's "activation").
- The network performs its computations layer by layer, from left to right.
- Each hidden unit computes its activation value; these activation values become the inputs for the output units, which then compute their own activations.
- Each output unit corresponds to one of the possible digit categories.
- The activation of an output unit can be thought of as the network's confidence that it is "seeing" the corresponding digit; the digit category with the highest confidence can be taken as the network's answer, its classification.
- Multilayer neural network can learn to use its hidden units to recognize more abstract features.
- Experts use trial and error to find the best settings.
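The layer-by-layer computation described above can be sketched as follows. The toy network here has 2 inputs, 3 hidden units, and 2 output categories with made-up weights (a real digit recognizer would have one input per pixel and 10 outputs); per-unit bias terms are omitted for brevity, and the sigmoid function stands in for "compute a number between 0 and 1."

```python
import math

def sigmoid(s):
    """Squash a unit's weighted sum into an activation between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weight_rows):
    """One layer: each unit multiplies the inputs by its weights, sums, and squashes."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws))) for ws in weight_rows]

def classify(inputs, hidden_weights, output_weights):
    """Compute activations layer by layer, left to right; the output unit with
    the highest activation (confidence) gives the network's classification."""
    hidden = layer(inputs, hidden_weights)
    outputs = layer(hidden, output_weights)
    return outputs.index(max(outputs))

# Made-up weights: one row per unit, one column per incoming connection.
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.4, 0.3]]
output_w = [[0.7, -0.5, 0.2], [-0.6, 0.8, 0.1]]
```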
Learning via Back-Propagation
- Minsky and Papert were skeptical about the possibility of learning the weights in a multilayer neural network.
- Their skepticism was largely responsible for the sharp decrease in funding for neural network research in the 1970s.
- Several groups rebutted Minsky and Papert's speculations by developing a general learning algorithm, called back-propagation, for training these networks.
- Back-propagation is a way to take an error observed at the output units and "propagate" the blame for that error backward to assign proper blame to each of the weights in the network.
- This allows back-propagation to determine how much to change each weight in order to reduce the error.
- Learning in neural networks simply consists in gradually modifying the weights on connections so that each output's error gets as close to 0 as possible on all training examples.
- Back-propagation will work no matter how many units your neural network has.
- Neural networks can be applied to many diverse tasks: speech recognition, stock-market prediction, language translation, and music composition.
- These networks came to be called connectionist networks, after the idea that knowledge in them resides in weighted connections between units.
- Symbolic AI was now appearing to be brittle.
- Symbolic AI was facing its own AI winter.
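The blame-assignment idea can be sketched for a tiny one-hidden-layer network, assuming sigmoid units and squared error with no bias terms; all weights and values here are illustrative.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, w_h, w_o):
    """Forward pass: hidden activations, then output activations."""
    h = [sigmoid(sum(xi * w for xi, w in zip(x, ws))) for ws in w_h]
    o = [sigmoid(sum(hi * w for hi, w in zip(h, ws))) for ws in w_o]
    return h, o

def backprop_step(x, target, w_h, w_o, lr=0.5):
    """One back-propagation update: assign blame for the output error to every
    weight, then nudge each weight to reduce the error."""
    h, o = forward(x, w_h, w_o)
    # Output-layer blame: each output's error scaled by the sigmoid's slope.
    d_o = [(oi - ti) * oi * (1 - oi) for oi, ti in zip(o, target)]
    # Propagate blame backward through the output weights to each hidden unit.
    d_h = [sum(d * w_o[k][j] for k, d in enumerate(d_o)) * h[j] * (1 - h[j])
           for j in range(len(h))]
    # Move every weight against its share of the blame.
    w_o = [[w - lr * d_o[k] * h[j] for j, w in enumerate(ws)]
           for k, ws in enumerate(w_o)]
    w_h = [[w - lr * d_h[j] * x[i] for i, w in enumerate(ws)]
           for j, ws in enumerate(w_h)]
    return w_h, w_o

def squared_error(x, target, w_h, w_o):
    _, o = forward(x, w_h, w_o)
    return sum((oi - ti) ** 2 for oi, ti in zip(o, target))
```

Repeating `backprop_step` over many training examples is what "gradually modifying the weights" amounts to: each step makes the output error on that example a little smaller.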
Approaches
- According to connectionism's proponents, the key to intelligence was an appropriate, brain-inspired computational architecture.
- Rumelhart and others constructed connectionist networks (in software) as models of perception and language development.
- In 1988, the Defense Advanced Research Projects Agency (DARPA) proclaimed neural networks were important.
- People have debated symbolic and subsymbolic approaches.
- Symbolic systems can use human-understandable reasoning.
- Subsymbolic systems tend to be hard to interpret, and complex human knowledge or logic cannot be programmed into them directly.
- Subsymbolic systems seem better suited to perceptual or motor tasks for which humans can't easily define rules.
- Each of these approaches has had important successes in narrow areas but has serious limitations in achieving the original goals of AI.
- There have been attempts to construct hybrid systems that integrate subsymbolic and symbolic methods, but none have yet led to striking success.
The ascent of machine learning
- AI researchers developed numerous machine-learning algorithms, and machine learning grew into its own independent subdiscipline of AI.
- Machine-learning researchers disparagingly referred to symbolic AI methods as "good old-fashioned AI" and roundly rejected them.
- Machine learning had its cycles of optimism, government funding, start-ups, and overpromising.
- Training neural networks and similar methods to solve real-world problems was slow and often failed because of limited computing power.
- The explosive growth of the internet would change that, setting the stage for a new AI revolution.