Questions and Answers
Which of the following best explains why games are valuable in the field of Artificial Intelligence (AI)?
- They are too complex for humans to understand, thus AI provides the advantage.
- They present complex challenges that are hard to solve, encouraging the development of advanced AI techniques. (correct)
- They are simple to solve, providing a basic training ground for AI algorithms.
- They have limited applications and do not require sophisticated algorithms.
Which game was among the first to be computerized as an example of Artificial Intelligence?
- Chess
- Checkers
- Nim (correct)
- Tic-Tac-Toe
Which AI program, developed in the 1960s, introduced key concepts in game playing such as search trees and heuristics?
- AlphaGo
- Deep Blue
- Arthur Samuel's Checkers Program (correct)
- Deep Fritz
What term did Arthur Samuel coin in the context of his checkers program, which is now a fundamental concept in AI?
What was the significance of the checkers program Chinook in the history of AI game playing?
Which of the following describes an 'endgame database' as used by Chinook?
Who proposed chess as an AI problem in his 1949 paper, 'Programming a Computer for Playing Chess'?
Which AI program was the first to defeat a human world chess champion in a six-game match?
Which AI program defeated the European Go champion and one of the world's best players in 2016?
Within the context of AI, what does the 'formulate, search, execute' design philosophy refer to?
What is the primary goal of the 8-queens puzzle?
Which of the following best describes the 'states' in the context of the 8-queens puzzle?
Which of the following defines the 'actions' in the context of the 8-queens puzzle?
What is the 'goal test' when solving the 8-queens puzzle?
What is the 'transition model' referring to in the context of the 8-queens puzzle?
In the context of the 8-puzzle problem, what defines the 'states'?
For the 8-puzzle problem, what comprises the 'actions'?
In the 8-puzzle problem, what is the role of the Transition Model?
In the context of the 8-puzzle, what constitutes the Goal Test?
In the context of the Travelling Problem, what does the term 'initial state' refer to?
Regarding intelligent agents, which design philosophy uses 'formulate, search, execute'?
In the context of AI search algorithms, what is a 'heuristic function'?
Which of the following algorithms is used to handle chance nodes?
What defines a zero-sum game in the context of game playing AI?
In game theory and AI, what is referred to as a 'ply'?
What does 'embedded thinking' refer to in the context of adversarial game playing?
In adversarial search, which of the following describes why the optimal solution differs from a typical search problem?
Which search algorithm is typically used for the minimax algorithm?
What is the purpose of the 'terminal test' in the formalization of a game?
How does the Minimax algorithm operate under the assumption of optimal play?
What does the Utility Function evaluate?
What is the primary goal of Alpha-Beta pruning?
In Alpha-Beta pruning, what does the 'alpha' value represent?
What condition triggers pruning in the Alpha-Beta pruning algorithm?
In game-playing AI, what is the key approach to handling real-time constraints when using Minimax or Alpha-Beta?
What is the purpose of an 'evaluation function' in real-time game-playing AI?
How did Deep Blue's evaluation function improve its chess play?
What was a key factor that contributed to Deep Blue's ability to defeat Garry Kasparov?
What is the primary difference between deterministic games and stochastic games?
Which algorithm is a generalized form of the Minimax algorithm designed to handle stochastic games?
Identify some of the techniques that are effective and can be used in practice.
Flashcards
- Games in AI: Games are very relevant in AI, including games like Tic-Tac-Toe and Backgammon.
- Nim: A computerized game created in 1951; one of the first examples of AI.
- Early AI Game Programs: Created in 1951 by Christopher Strachey for checkers and Dietrich Prinz for chess.
- Arthur Samuel's Checkers Program
- Rote Learning
- Deep Blue
- AlphaGo
- Goal Formulation
- Problem Formulation
- Search
- 8-Queens Puzzle
- 8-Queens Puzzle State
- 8-Queens Transition Model
- 8-Queens Puzzle Goal Test
- 8-Puzzle State Description
- 8-Puzzle Actions
- 8-Puzzle Transition Model
- 8-Puzzle Goal Test
- Initial State
- Action(s)
- Transition Model
- Leaf Node
- Depth
- Breadth
- Breadth-First Search
- Depth-First Search
- Heuristic Function
- A* Search
- Deterministic Games
- Zero-Sum Games
- Embedded Thinking
- Game
- Formalization
- Minimax Principle
- Adversarial Search
- MAX Node (Adversarial Search)
- MIN Node (Adversarial Search)
- Informed Search Algorithm
- Alpha-Beta Pruning Parameters
- Alpha-Beta Pruning
Study Notes
Artificial Intelligence: Game Playing
- Games represent a significant area in AI, exemplified by games like Tic-Tac-Toe and backgammon.
- Games are interesting to AI because their solutions can be complex.
- The computerized game of Nim, created in 1951, is an early AI example; play it at "https://www.archimedes-lab.org/game_nim/play_nim_game.html".
- In 1951, utilizing the Ferranti Mark 1, Christopher Strachey developed a program for checkers while Dietrich Prinz created one for chess at the University of Manchester.
- Arthur Samuel's checkers program in the 1960s introduced game-playing concepts: search trees, pruning, and heuristics.
- Arthur Samuel coined the term "machine learning", referring to "rote learning" and "generalization learning."
Game Playing Machines
Checkers
- Chinook ended Marion Tinsley's 40-year reign as human world champion in 1994.
- Chinook used an endgame database to define perfect play for positions involving 8 or fewer pieces, totaling 443,748,401,247 positions.
- Checkers is available at "https://cardgames.io/checkers/".
Chess
- In 1949, Claude E. Shannon suggested chess for the AI community in his paper "Programming a Computer for Playing Chess."
- Deep Blue defeated world champion Garry Kasparov in a six-game match in 1997.
- In 2006, Deep Fritz defeated the undisputed world champion, Vladimir Kramnik, with a score of 4-2.
Deep Blue
- Deep Blue grew out of Deep Thought, a chess-playing machine project that began at CMU.
- In 1996, Deep Blue became the first computer to win a game against a reigning world chess champion under standard tournament conditions.
- Kasparov observed intelligence and creativity in Deep Blue's moves.
Go
- AlphaGo was developed by the Google DeepMind project; Go's branching factor exceeds 300, far larger than that of chess.
- In 2016, AlphaGo beat European Go champion Fan Hui and one of the world's best players, Lee Sedol.
Game Playing: Searching Solutions
- Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
- Problem formulation is determining which actions and states to consider for a goal.
- Search denotes the process seeking a sequence of actions that lead to the goal.
- A search algorithm uses a problem as input, returning a solution as an action sequence; the execution phase happens once a solution has been found.
- Intelligent agents follow "formulate, search, execute" design principles.
Game Playing: 8 Queens Puzzle
- The goal is to arrange eight queens on a chessboard such that no queen attacks any other.
- Each state represents an arrangement of 0-8 queens.
- The initial state involves no queens on the board.
- Actions consist of adding a queen to any empty square.
- The transition model returns the board with the new queen added to the specified square (see the sketch after this list).
- The goal test means having eight queens on the board where none can attack each other.
- Possible sequences to investigate are around 1.8 x 10^14 (64 * 63 * ... * 57).
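A minimal Python sketch of this incremental formulation, under the assumptions above (function and variable names are illustrative, not from the source):

```python
# Incremental 8-queens formulation: a state is a tuple of (row, col) queen
# positions; actions add one queen to any empty square.

def attacks(q1, q2):
    """True if two queens share a row, column, or diagonal."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state, n=8):
    """All empty squares where a new queen could be placed."""
    occupied = set(state)
    return [(r, c) for r in range(n) for c in range(n) if (r, c) not in occupied]

def result(state, square):
    """Transition model: return the board with the new queen added."""
    return state + (square,)

def goal_test(state, n=8):
    """Eight queens on the board, none attacking another."""
    return len(state) == n and all(
        not attacks(state[i], state[j])
        for i in range(len(state)) for j in range(i + 1, len(state))
    )
```

Starting from the empty board `()`, a search would repeatedly apply `result` until `goal_test` succeeds.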
8 Puzzle Problem
- Any state can be designated as the initial state.
- Half of the possible initial states can reach any given goal.
- The simplest formulation defines actions as blank space movements: Left, Right, Up, or Down.
- Given a state and an action, the transition model returns the resulting state.
- The goal test verifies the state matches the goal.
- Each step has a cost of 1, with the path cost referring to the number of steps on the path.
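A small sketch of this formulation, assuming the state is a tuple of nine tiles in row-major order with 0 for the blank (the goal layout and names below are illustrative choices, not from the source):

```python
# 8-puzzle: state is a tuple of 9 tiles in row-major order, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)          # one common choice of goal configuration

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(state):
    """Legal blank-space movements (Left, Right, Up, Down) from this state."""
    i = state.index(0)                       # position of the blank
    legal = []
    if i % 3 > 0: legal.append("Left")
    if i % 3 < 2: legal.append("Right")
    if i // 3 > 0: legal.append("Up")
    if i // 3 < 2: legal.append("Down")
    return legal

def result(state, action):
    """Transition model: slide the blank in the given direction."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL                     # each step costs 1; path cost = number of steps
```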
Travelling Problem
- INITIAL STATE: the state the agent starts in, e.g. In(Arad).
- ACTIONS(s): returns the set of actions applicable in state s; from In(Arad) these are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
- TRANSITION MODEL: specified by a function RESULT(s, a) that returns the state that results from doing action a in state s.
- GOAL STATE: {In(Bucharest)}.
- PATH COST: a cost function that assigns a numeric cost to each path (see the sketch below).
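These five components can be captured in a few small functions. The road map below is only an illustrative fragment of the Romania example (remaining cities omitted), and the function names are assumptions for this sketch:

```python
# Partial road map (illustrative subset of the Romania example), distances in km.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    # ... remaining cities omitted
}

INITIAL_STATE = "Arad"
GOAL_STATE = "Bucharest"

def actions(state):
    """ACTIONS(s): the cities reachable in one step, e.g. Go(Sibiu) from Arad."""
    return list(ROADS.get(state, {}))

def result(state, action):
    """TRANSITION MODEL RESULT(s, a): the state reached by driving to that city."""
    return action

def step_cost(state, action):
    """PATH COST is the sum of these road distances along the chosen path."""
    return ROADS[state][action]
```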
Tree Terminology
- Node: Denotes some state/value.
- Root: The top node in a tree, the prime ancestor.
- Child: A node directly connected to another node when moving away from the root, an immediate descendant.
- Parent: The converse notion of a child, an immediate ancestor.
- Leaf: A node with no children.
- Internal node: A node with at least one child.
- Edge: The connection between one node and another.
- Depth: The distance between a node and the root.
- Level: The number of edges between a node and the root + 1.
- Height: The number of edges on the longest path between a node and a descendant leaf.
- Breadth: The number of leaves.
- Subtree: given a tree T, a subtree is a tree consisting of a node in T and all of its descendants in T.
- Binary Tree: A tree data structure in which each node has at most two children, referred to as the left child and the right child.
- Branching Factor: The maximum number of branches (children) for any node.
Breadth-First Search
- In breadth-first search, memory usage is typically a bigger problem than execution time.
- Exponential-complexity search problems cannot be solved by uninformed methods, except for the smallest instances.
Depth-First Search
- Depth-first search requires storage of only O(bd) nodes (branching factor b times depth d), far less than breadth-first search.
- At depth d = 16, depth-first search requires 156 kilobytes, 7 trillion times less space than the 10 exabytes required for breadth-first search.
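The memory contrast comes from how the frontier is managed: a FIFO queue for breadth-first search versus a LIFO stack for depth-first search. A hedged sketch under that framing (the `expand` and `goal_test` callbacks and the depth limit are assumptions of this example):

```python
from collections import deque

def breadth_first_search(start, goal_test, expand):
    """BFS: FIFO frontier; memory grows with every generated node that is kept."""
    frontier = deque([start])                 # FIFO queue
    seen = {start}
    while frontier:
        node = frontier.popleft()
        if goal_test(node):
            return node
        for child in expand(node):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return None

def depth_first_search(start, goal_test, expand, depth_limit=50):
    """DFS (tree search): LIFO frontier; the stack holds roughly b nodes per
    level of the current path, i.e. O(bd) storage."""
    frontier = [(start, 0)]                   # LIFO stack of (node, depth)
    while frontier:
        node, depth = frontier.pop()
        if goal_test(node):
            return node
        if depth < depth_limit:
            frontier.extend((child, depth + 1) for child in expand(node))
    return None
```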
Informed Search: Best-First Search
- A node is selected for expansion based on an evaluation function.
- The evaluation function uses a heuristic function with estimated cost.
- The heuristic h(n) is the estimated cost of the cheapest path from the state at node n to a goal state.
- The straight-line distance heuristic estimates the cost of reaching Bucharest from any city by the straight-line distance from that city to Bucharest.
Informed Search: A* Search
- Nodes are evaluated by combining g(n) and h(n).
- g(n) is the cost to reach the node.
- h(n) is the estimated cost to get from the node to the goal.
- f(n) = g(n) + h(n)
- f(n) is the estimated cost of the cheapest solution through n.
- The node with the lowest g(n) + h(n) is tried first.
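A short A* sketch over a weighted graph, assuming a `neighbors(n)` callback and a heuristic dictionary `h` (for the Romania example, `h` would hold the straight-line distances to Bucharest); all names here are illustrative. Nodes are assumed to be comparable values such as city-name strings so that heap ties break cleanly:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A*: expand the frontier node with the lowest f(n) = g(n) + h(n).

    neighbors(n) -> iterable of (next_node, step_cost); h maps node -> estimate.
    """
    frontier = [(h[start], 0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0}                             # cheapest known g(n) per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")
```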
Types of Games
- Deterministic games include chess, checkers, Go, and Othello.
- Chance games include backgammon and monopoly.
- Imperfect information deterministic games are battleships and blind tic-tac-toe.
- Imperfect information chance games are bridge, poker, scrabble, and nuclear war.
- There's particular interest in deterministic, fully observable, zero-sum games, where two agents act.
Zero-Sum Games
- Adversarial: Pure competition
- Agents have opposite values (utilities) on the outcomes.
- One agent maximizes a single value, and the other minimizes it.
- Each move by one player is called a "ply".
Embedded Thinking
- One agent tries to decide what to do, thinking about possible actions' consequences.
- Agents must consider their opponent's actions as well.
- Players imagine the opponent's response to their actions.
- Embedded thinking entails this back-and-forth reasoning: each player reasons about the opponent's reasoning, and so on recursively.
Adversarial Search
- There is an opponent preventing optimal planning.
- The optimal solution is a strategy, not a sequence of actions.
- Strategy: If the opponent does a, the agent does b; otherwise, if the opponent does c, the agent does d.
- Model games as search problems and use heuristic evaluation functions.
Single Player
- Consider a tic-tac-toe position in which a single player (Max) has three possible moves.
- Max chooses the path that leads to a win.
- Another player would do everything to make Max lose.
Adversarial Search
- The minimax involves two players: Max and Min.
- The players take turns.
- Max moves first.
- Max maximizes results.
- Min minimizes the results.
- Each node's minimax value is the best achievable utility against an optimal adversary.
- Minimax value: best achievable payoff against best play.
Formalization of Game Playing
- Initial state defines the game starting point.
- Player(s): designates which player has the move in state s.
- Actions(s): defines the legal moves in state s.
- Transition function: defines the result of each move (S × A → S).
- Terminal test identifies when the game is over.
- Terminal states indicate states where a game ends.
- Utility(s, p): the game's utility function, giving the payoff of terminal state s for player p.
- Chess has values of +1, 0, and 1/2 for a win, loss, or draw, respectively.
- Tic-tac-toe has utility values of +1, -1, and 0 (win, loss, draw).
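These formal elements map naturally onto a small interface. This is a hedged sketch (the class and method names are not from the source); the minimax code further below builds on it:

```python
class Game:
    """Formal elements of a two-player, zero-sum, perfect-information game."""

    def initial_state(self):          # starting point of the game
        raise NotImplementedError

    def player(self, s):              # which player ("MAX" or "MIN") moves in s
        raise NotImplementedError

    def actions(self, s):             # legal moves in state s
        raise NotImplementedError

    def result(self, s, a):           # transition function S x A -> S
        raise NotImplementedError

    def terminal_test(self, s):       # is the game over in s?
        raise NotImplementedError

    def utility(self, s, p):          # payoff of terminal state s for player p
        raise NotImplementedError     # e.g. +1 / -1 / 0 in tic-tac-toe
```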
Adversarial Search: Minimax
- The optimal strategy for Max is to use the depth-first search of the game tree.
- Leaf (terminal) nodes can appear at any level of the tree.
Minimax Principle
- Compute the utility of being in a state assuming that both players play optimally all the way until the end of the game.
- Propagate minimax values up the tree as soon as terminal nodes are discovered.
- For terminal nodes, value is utility(state).
- If the state is a MAX node, its value is the highest value among its children.
- If the state is a MIN node, its value is the lowest value among its children.
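A direct transcription of this principle into a recursive depth-first sketch, using the hypothetical `Game` interface introduced above (values are always taken from MAX's point of view):

```python
def minimax_value(game, state):
    """Minimax value of a state, assuming both players play optimally."""
    if game.terminal_test(state):
        return game.utility(state, "MAX")        # value of a terminal node
    values = [minimax_value(game, game.result(state, a))
              for a in game.actions(state)]
    if game.player(state) == "MAX":
        return max(values)                       # MAX picks the highest child value
    return min(values)                           # MIN picks the lowest child value

def minimax_decision(game, state):
    """Move for MAX leading to the child with the best minimax value."""
    return max(game.actions(state),
               key=lambda a: minimax_value(game, game.result(state, a)))
```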
Minimax Algorithm
- The algorithm is optimal and complete for finite trees.
- The depth-first search has O(b^d) time complexity and O(bd) space complexity, where b is the branching factor and d is the depth of the tree.
Tic-Tac-Toe
- There are about five legal moves per position on average, and a game lasts at most nine moves (9 plies).
- The game tree therefore has roughly 5^9 = 1,953,125 nodes, and there are 9! = 362,880 possible distinct games.
Chess
- Chess has an average branching factor b ≈ 35 and a typical game-tree depth d ≈ 100, so the game tree has roughly b^d = 35^100 ≈ 10^154 nodes.
Go
- Branching factor starts at 361 (19×19 board).
Multiplayer Games
- The game structure and approach differ from two-player games, with complexities arising from multiple decision-makers and alliances.
Case of Limited Resources
- In real-time games, there is not enough time to search all the way down to the leaves (terminal states).
- In practice, minimax can only be run to a limited depth.
- Pruning large chunks of the tree lets the search look more plies ahead, which makes a big difference.
- Solution: replace terminal utilities with an evaluation function eval(s) for non-terminal positions, and eliminate large parts of the tree via pruning (see the sketch below).
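One common way to realize this, sketched under the same assumptions as the earlier minimax example: cut the recursion off at a fixed depth and substitute an evaluation function (the `evaluate` callback here is a hypothetical heuristic) for the true utility at non-terminal cutoff nodes:

```python
def depth_limited_minimax(game, state, depth, evaluate):
    """Minimax cut off at a fixed depth; evaluate(s) estimates non-terminal values."""
    if game.terminal_test(state):
        return game.utility(state, "MAX")
    if depth == 0:
        return evaluate(state)                   # heuristic stand-in for true utility
    values = [depth_limited_minimax(game, game.result(state, a), depth - 1, evaluate)
              for a in game.actions(state)]
    return max(values) if game.player(state) == "MAX" else min(values)
```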
Alpha-Beta Pruning
- Just like minimax, pruning performs a depth-first search.
- Pruning requires keeping track of an alpha and beta.
- Alpha is the largest value for Max across children, representing a lower bound on Max's outcome.
- Beta is the lowest value for MIN across explored children, signifying an upper bound on MIN’s outcome.
Propagation and Pruning
- Pass alpha and beta values down the tree to enable pruning during the search.
- Compute values at terminal nodes as in minimax.
- Update Alpha only at Max nodes and update Beta only at Min nodes.
- Prune the remaining branches at a node as soon as alpha ≥ beta.
- Apply domain knowledge to craft useful evaluation-function features.
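A compact alpha-beta sketch on top of the same hypothetical `Game` interface: alpha tracks the best value found so far for MAX, beta the best for MIN, and a branch is pruned as soon as alpha ≥ beta. It returns the same value as plain minimax while exploring fewer nodes:

```python
def alpha_beta_value(game, state, alpha=float("-inf"), beta=float("inf")):
    """Minimax value with alpha-beta pruning (same result, fewer nodes explored)."""
    if game.terminal_test(state):
        return game.utility(state, "MAX")
    if game.player(state) == "MAX":
        value = float("-inf")
        for a in game.actions(state):
            value = max(value, alpha_beta_value(game, game.result(state, a), alpha, beta))
            alpha = max(alpha, value)            # update alpha only at MAX nodes
            if alpha >= beta:
                break                            # prune: MIN will never allow this branch
        return value
    value = float("inf")
    for a in game.actions(state):
        value = min(value, alpha_beta_value(game, game.result(state, a), alpha, beta))
        beta = min(beta, value)                  # update beta only at MIN nodes
        if alpha >= beta:
            break                                # prune: MAX already has a better option
    return value
```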