The Ultimate AI History Quiz



9 Questions

What was the name of the workshop that is considered the birthplace of AI?

What is the vanishing gradient problem in recurrent neural networks?

What is the name of the humanoid robot built by Waseda University in 1972?

What was the name of the chess-playing system that beat a reigning world chess champion in 1997?

What is the name of the book that led to a halt in research into neural nets or connectionism for ten years?

What is the name of the project funded by the Japanese government that aimed to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings?

What was the reason for the first AI winter?

What is the name of the program created by Allen Newell, Cliff Shaw, and Herbert A. Simon that proved 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica?

What is the guiding faith of AI research in the 20th century?


A Brief History of Artificial Intelligence

  • AI's origins date back to ancient myths and legends, with stories of artificially intelligent beings.

  • Philosophers in antiquity attempted to describe human thinking as the mechanical manipulation of symbols.

  • The field of AI research was founded in 1956 at a workshop at Dartmouth College, with scientists predicting the creation of an electronic brain within a generation.

  • However, funding was stopped in 1974 due to criticism and an "AI winter" followed.

  • Interest was renewed in the 21st century with machine learning and the application of powerful computer hardware.

  • Realistic humanoid automata were built by craftsmen from many civilizations, including Yan Shi, Hero of Alexandria, Al-Jazari, Pierre Jaquet-Droz, and Wolfgang von Kempelen.

  • The study of mechanical reasoning has a long history, with Chinese, Indian, and Greek philosophers developing structured methods of formal deduction.

  • The physical symbol system hypothesis became the guiding faith of AI research in the 20th century.

  • Calculating machines were built in antiquity and improved throughout history by many mathematicians, including philosopher Gottfried Leibniz.

  • The first modern computers were the massive code-breaking machines of the Second World War, based on the theoretical foundation laid by Alan Turing and developed by John von Neumann.

  • In the 1940s and 50s, a handful of scientists from various fields began discussing the possibility of creating an artificial brain.

  • AI's progress was measured through game AI, with Christopher Strachey writing a checkers program and Arthur Samuel's checkers program achieving sufficient skill to challenge an amateur.

  • The Logic Theorist program, created by Allen Newell, Cliff Shaw, and Herbert A. Simon, proved 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica and found new and more elegant proofs for some.

  • The Dartmouth Workshop of 1956, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathan Rochester, is widely considered the birth of AI.

  • The workshop attendees debuted the "Logic Theorist," and McCarthy persuaded them to accept "Artificial Intelligence" as the name of the field.

  • The first generation of AI researchers expressed intense optimism, predicting that a fully intelligent machine would be built in less than 20 years.

  • Many early AI programs used the "reasoning as search" paradigm: to reach a goal, they proceeded step by step through a space of possible moves or deductions, backtracking whenever they hit a dead end.

  • AI researchers aimed to allow computers to communicate in natural languages like English, and early successes included the STUDENT program and semantic nets.

  • Marvin Minsky and Seymour Papert proposed that AI research should focus on artificially simple situations known as micro-worlds, leading to innovative work in machine vision.

  • Waseda University initiated the WABOT project in 1967 and completed the WABOT-1 humanoid robot in 1972.

  • Critiques of AI researchers' claims were made by several philosophers, including John Lucas, Hubert Dreyfus, and John Searle.

  • The agencies that funded AI research became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI.

  • The publication of Minsky and Papert's 1969 book "Perceptrons" led to a halt in research into neural nets or connectionism for ten years.

  • Logic and symbolic reasoning were introduced into AI research early, but critics noted that humans rarely use step-by-step logic when they solve problems, a point supported by psychologists' experiments.

  • Rules continued to be influential, providing a foundation for expert systems and continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.
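
The "reasoning as search" paradigm mentioned above can be sketched in a few lines. This is an illustrative toy, not a reconstruction of any historical program; the function name, the move graph, and the state labels are all assumptions made for the example:

```python
# Illustrative sketch of "reasoning as search": explore possible moves
# step by step toward a goal, backtracking at dead ends.

def search(state, goal, moves, path=None):
    """Depth-first search; returns the sequence of states reaching `goal`,
    or None if no path exists from `state`."""
    if path is None:
        path = [state]
    if state == goal:
        return path
    for nxt in moves.get(state, []):
        if nxt not in path:  # avoid revisiting states already on this path
            found = search(nxt, goal, moves, path + [nxt])
            if found:
                return found
    return None  # dead end: the caller backtracks and tries another move

# A tiny hypothetical state graph: one branch dead-ends, the other succeeds.
toy_moves = {"start": ["a", "b"], "a": ["dead_end"], "b": ["goal"]}
print(search("start", "goal", toy_moves))
```

The program first explores the "a" branch, exhausts it, backtracks, and then finds the goal through "b", mirroring the step-by-step move-or-deduction behavior described above.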

  • McCarthy, Minsky, Papert, and Schank were key figures in AI research in the 60s and 70s.

  • The concept of "frames" was developed to capture all our common sense assumptions about something.

  • Expert systems became popular in the 80s and were used by corporations worldwide to answer questions about specific domains of knowledge.

  • The power of expert systems came from the expert knowledge they contained, leading to the focus on knowledge-based systems and knowledge engineering.

  • The Japanese government funded the Fifth Generation computer project, which aimed to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.

  • The collapse of the market for specialized AI hardware in 1987 triggered a second AI winter, another period of financial setbacks for AI.

  • Robotics developers Rodney Brooks and Hans Moravec advocated for a new approach to AI based on robotics, arguing that machines need to have a body to show real intelligence.

  • Intelligent agents became widely accepted in the 90s, defining AI research as "the study of intelligent agents."

  • Deep Blue became the first computer chess-playing system to beat a reigning world chess champion in 1997.

  • The lack of raw computer power, long a fundamental obstacle for AI, was gradually overcome as computers grew in speed and capacity.

  • The intelligent agent paradigm gave researchers license to study isolated problems and find useful solutions while providing a common language to describe problems and solutions.

  • AI has achieved some of its oldest goals, but it remains fragmented into competing subfields focused on particular problems or approaches.

The Evolution of Artificial Intelligence

  • AI researchers started using sophisticated mathematical tools and collaborating with other fields like mathematics, electrical engineering, economics, or operations research, making AI a more rigorous "scientific" discipline.

  • Probabilistic reasoning was introduced into AI by Judea Pearl's influential 1988 book, and new mathematical tools like Bayesian networks, hidden Markov models, and stochastic modeling were developed.

  • Algorithms initially developed by AI researchers became part of larger systems, and AI solutions proved useful in various industries like data mining, speech recognition, or medical diagnosis.

  • AI's greatest innovations have been reduced to just another tool in computer science, and many researchers started calling their work by other names like knowledge-based systems or computational intelligence.

  • In the early 21st century, big data, cheaper and faster computers, and advanced machine learning techniques were successfully applied to many problems throughout the economy.

  • The market for AI-related products, hardware, and software reached over 8 billion dollars in 2016, and interest in AI reached a "frenzy."

  • Advances in deep learning drove progress in image and video processing, text analysis, and speech recognition, and deep neural networks can represent far more complex functions than shallow networks.

  • Recurrent neural networks suffer from the vanishing gradient problem, in which gradients passed backward through many time steps gradually shrink toward zero; methods such as long short-term memory (LSTM) units mitigate this problem.

  • Big data refers to collections of data too large or complex to be captured, managed, and processed with conventional tools, requiring new processing models to extract value for decision-making, insight, and process optimization.

  • Artificial general intelligence (AGI) is a program that can apply intelligence to a wide variety of problems, and foundation models began to be developed in 2018.

  • GPT-3 and Gato are large artificial intelligence models that can be adapted to a wide range of downstream tasks, and researchers at Microsoft Research suggested that GPT-4 could reasonably be viewed as an early version of an AGI system.

  • The expectations of AI have been high, and while AI has made significant progress in many areas, it has not yet reached the level of intelligence imagined by science fiction writers like Arthur C. Clarke and Stanley Kubrick.
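
The vanishing gradient problem described in the list above can be illustrated with a toy calculation. This is a simplified sketch under assumed numbers, not a real RNN: backpropagating through many time steps repeatedly multiplies the gradient by the recurrent weight, so a weight below 1 drives the gradient toward zero:

```python
# Illustrative sketch of the vanishing gradient problem in a scalar "RNN":
# each backward step through time multiplies the gradient by the recurrent
# weight (chain rule), so weights below 1 shrink it exponentially.

def gradient_through_time(recurrent_weight, num_steps, initial_grad=1.0):
    """Propagate a scalar gradient backward through `num_steps` time steps."""
    grad = initial_grad
    for _ in range(num_steps):
        grad *= recurrent_weight  # one chain-rule factor per time step
    return grad

# With a weight of 0.5 over 50 steps the gradient all but disappears,
# while a weight of exactly 1.0 leaves it unchanged.
print(gradient_through_time(0.5, 50))
print(gradient_through_time(1.0, 50))
```

Gating mechanisms such as LSTM units work around this by maintaining a path through time whose effective multiplier stays near 1, so learning signals survive over long sequences.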


How much do you know about the history and evolution of Artificial Intelligence (AI)? Test your knowledge with our quiz, which covers the origins of AI in ancient myths and legends, its development in the 20th century, the rise and fall of expert systems and connectionism, and the latest breakthroughs in big data, machine learning, and deep neural networks. We've included fascinating facts and figures, key players and events, and controversies and challenges in AI research. Challenge yourself and see how much you know!

Ready to take the quiz?

Play Quiz