AI: Game Playing

Questions and Answers

Which of the following best explains why games are valuable in the field of Artificial Intelligence (AI)?

  • They are too complex for humans to understand, thus AI provides the advantage
  • They present complex challenges that are hard to solve, encouraging the development of advanced AI techniques. (correct)
  • They are simple to solve, providing a basic training ground for AI algorithms.
  • They have limited applications and do not require sophisticated algorithms.

Which game was among the first to be computerized as an example of Artificial Intelligence?

  • Chess
  • Checkers
  • Nim (correct)
  • Tic-Tac-Toe

Which AI program, developed in the 1960s, introduced key concepts in game playing such as search trees and heuristics?

  • AlphaGo
  • Deep Blue
  • Arthur Samuel's Checkers Program (correct)
  • Deep Fritz

What term did Arthur Samuel coin in the context of his checkers program, which is now a fundamental concept in AI?

Answer: Machine Learning

What was the significance of the checkers program Chinook in the history of AI game playing?

Answer: It was the first AI to defeat a human world champion in checkers.

Which of the following describes an 'endgame database' as used by Chinook?

Answer: A record of perfect play for all positions with a limited number of pieces.

Who proposed chess as an AI problem in his 1949 paper, 'Programming a Computer for Playing Chess'?

Answer: Claude E. Shannon

Which AI program was the first to defeat a human world chess champion in a six-game match?

Answer: Deep Blue

Which AI program defeated the European Go champion and one of the world's best players in 2016?

Answer: AlphaGo

Within the context of AI, what does the 'formulate, search, execute' design philosophy refer to?

Answer: The steps an intelligent agent uses to solve problems: defining the problem, finding a solution, and implementing it.

What is the primary goal of the 8-queens puzzle?

Answer: To arrange eight queens on a chessboard so that none of them attack each other.

Which of the following best describes the 'states' in the context of the 8-queens puzzle?

Answer: The possible arrangements of 0 to 8 queens on the board.

Which of the following defines the 'actions' in the context of the 8-queens puzzle?

Answer: Adding a queen to any empty square.

What is the 'goal test' when solving the 8-queens puzzle?

Answer: Ensuring that all eight queens are placed on the board and none attack each other.

What is the 'transition model' referring to in the context of the 8-queens puzzle?

Answer: A function that returns the new board state after adding a queen.

In the context of the 8-puzzle problem, what defines the 'states'?

Answer: The specific arrangement of tiles in the puzzle.

For the 8-puzzle problem, what comprises the 'actions'?

Answer: Moving a tile into the blank space.

In the 8-puzzle problem, what is the role of the Transition Model?

Answer: It returns the new state after an action has been applied.

In the context of the 8-puzzle, what constitutes the Goal Test?

Answer: Matching a predefined, solved configuration of tiles.

In the context of the Travelling Problem, what does the term 'initial state' refer to?

Answer: The starting city.

Regarding intelligent agents, which design philosophy uses 'formulate, search, execute'?

Answer: Formulate a goal, find the steps to achieve it, and execute them.

In the context of AI search algorithms, what is a 'heuristic function'?

Answer: A function that estimates the cost of reaching the goal from a given state.

Which of the following algorithms is used to handle chance nodes?

Answer: Expectiminimax

What defines a zero-sum game in the context of game playing AI?

Answer: A game where one agent's gain is equivalent to another agent's loss.

In game theory and AI, what is referred to as a 'ply'?

Answer: A move made by one of the players.

What does 'embedded thinking' refer to in the context of adversarial game playing?

Answer: The process of predicting an opponent's responses to one's own actions.

In adversarial search, which of the following describes why the optimal solution differs from a typical search problem?

Answer: The best strategy must account for all of the opponent's possible strategies.

Which search algorithm is typically used for the minimax algorithm?

Answer: Depth-first search (DFS)

What is the purpose of the 'terminal test' in the formalization of a game?

Answer: To check whether the game is over.

How does the Minimax algorithm operate under the assumption of optimal play?

Answer: It assumes one player seeks to maximize their score, while the other seeks to minimize it.

What does the Utility Function evaluate?

Answer: Terminal states, assigning a numeric value to them for each player.

What is the primary goal of Alpha-Beta pruning?

Answer: To reduce the search space by eliminating branches that cannot influence the final decision.

In Alpha-Beta pruning, what does the 'alpha' value represent?

Answer: The lowest score that the maximizing player is assured of.

What condition triggers pruning in the Alpha-Beta pruning algorithm?

Answer: When alpha is greater than or equal to beta.

In game-playing AI, what is the key approach to handling real-time constraints when using Minimax or Alpha-Beta?

Answer: Replacing terminal utilities with an evaluation function and bounding the search depth.

What is the purpose of an 'evaluation function' in real-time game-playing AI?

Answer: To assess the likelihood of winning from a given board configuration without completing the game.

How did Deep Blue's evaluation function improve its chess play?

Answer: Its evaluation function was split into about 8,000 parts, each designed for particular kinds of positions.

What was a key factor that contributed to Deep Blue's ability to defeat Garry Kasparov?

Answer: Massive computational power that allowed it to search an enormous number of positions.

What is the primary difference between deterministic games and stochastic games?

Answer: Stochastic games include random elements, while deterministic games do not.

Which algorithm is a generalized form of the Minimax algorithm designed to handle stochastic games?

Answer: Expectiminimax

Which of the following techniques are effective and can be used practically in game playing?

Answer: Pruning heuristics and ordering nodes

Flashcards

Games in AI

Games are very relevant in AI, including games like Tic-Tac-Toe and Backgammon.

Nim

A game computerized in 1951; one of the first examples of AI.

Early AI Game Programs

Created in 1951 by Christopher Strachey for checkers and Dietrich Prinz for chess.

Arthur Samuel's Checkers Program

Introduced in the 1960s, it included concepts like search trees, pruning, and heuristics.

Rote Learning

Learning by memorizing previously seen positions and their outcomes, as distinct from learning by generalization.

Deep Blue

The first computer to win a game against a reigning world chess champion (1996); it defeated Garry Kasparov in a six-game match in 1997.

AlphaGo

DeepMind project that beat both Fan Hui and Lee Sedol at Go in 2016.

Goal Formulation

Based on the current situation and the agent's performance measure

Problem formulation

The process of deciding actions and states, given a goal

Search

Looking for a sequence of actions that reaches the goal

8-Queens puzzle

A problem with the goal of placing eight queens on a chessboard so they don't attack each other.

8-Queens puzzle State

Any arrangement of 0-8 queens on the board

8-Queens Transition Model

Returns the board after adding a queen

8-Queens puzzle Goal Test

Eight queens are on the board and none attacks another.

8-Puzzle state description

Describes the location of tiles and the blank space

8-Puzzle Actions

Movements of the blank space (left, right, up, down)

8-Puzzle Transition Model

Given an action, this returns the resulting state.

8-Puzzle Goal Test

Check if the state matches the goal configuration.

Initial State

The starting point in solving a problem

Action(s)

Returns possible actions from a given state

Transition Model

Returns the state that results from an action

Leaf Node

A node with no children.

Depth

Distance between a node and the root node.

Breadth

The number of leaves in the tree.

Breadth-first search

Visits nodes level by level.

Depth-first search

Explores a branch as deeply as possible before backtracking to visit neighbors.

Heuristic function

Estimates cost from a state to the goal.

A* Search

Search algorithm that combines g(n), the cost to reach the node, and h(n), the estimated cost to the goal.

Deterministic Games

Deterministic games are fully observable games in which two agents act alternately.

Zero-Sum Games

One agent maximizes a single value while the other minimizes it.

Embedded thinking

One agent decides what to do by reasoning about the opponent's likely responses to its own actions.

Game

Optimal solution is not a sequence of actions, but a strategy

Formalization

Utility(s) is the objective function that assigns a numeric value to each terminal state of a game.

Minimax Principle

Compute the utility of being in a state assuming both players play optimally

Adversarial search

Find the optimal strategy for Max

MAX node Adversarial search

Value is highest value of all successor node values (children)

MIN node Adversarial search

Value is lowest value of all successor node values (children)

Informed Search Algorithm

A node is selected for expansion based on an evaluation function

Alpha-beta pruning parameters

Alpha: the largest value seen so far for Max across explored children; Beta: the lowest value seen so far for Min across explored children.

Alpha-beta Pruning

A technique to reduce the number of nodes evaluated in the minimax algorithm.

Study Notes

Artificial Intelligence: Game Playing

  • Games represent a significant area in AI, exemplified by games like Tic-Tac-Toe and backgammon.
  • Games are interesting to AI because they pose complex challenges that are hard to solve, encouraging the development of advanced techniques.
  • The computerized game of Nim, created in 1951, is an early AI example; play it at "https://www.archimedes-lab.org/game_nim/play_nim_game.html".
  • In 1951, utilizing the Ferranti Mark 1, Christopher Strachey developed a program for checkers while Dietrich Prinz created one for chess at the University of Manchester.
  • Arthur Samuel's checkers program in the 1960s introduced game-playing concepts: search trees, pruning, and heuristics.
  • Arthur Samuel coined the term "machine learning", referring to "rote learning" and "generalization learning."

Game Playing Machines

Checkers

  • Chinook ended Marion Tinsley's 40-year reign as human world champion in 1994.
  • Chinook used an endgame database to define perfect play for positions involving 8 or fewer pieces, totaling 443,748,401,247 positions.
  • Checkers is available at "https://cardgames.io/checkers/".

Chess

  • In 1949, Claude E. Shannon suggested chess for the AI community in his paper "Programming a Computer for Playing Chess."
  • Deep Blue defeated world champion Garry Kasparov in a six-game match in 1997.
  • In 2006, Deep Fritz defeated the undisputed world champion, Vladimir Kramnik, with a score of 4-2.

Deep Blue

  • Deep Thought, a chess-playing machine project, began at CMU.
  • In 1996, Deep Blue became the first computer to win a game against a reigning world chess champion; it went on to win the full rematch in 1997.
  • Kasparov observed intelligence and creativity in Deep Blue's moves.

Go

  • AlphaGo, developed by the Google DeepMind project, had to cope with a branching factor greater than 300.
  • In 2016, AlphaGo beat European Go champion Fan Hui and world's best player Lee Sedol.

Game Playing: Searching Solutions

  • Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
  • Problem formulation is the process of deciding which actions and states to consider, given a goal.
  • Search is the process of looking for a sequence of actions that reaches the goal.
  • A search algorithm takes a problem as input and returns a solution as an action sequence; the execution phase carries out those actions once a solution has been found.
  • Intelligent agents follow the "formulate, search, execute" design principle (a minimal sketch of this loop follows this list).
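
A minimal Python sketch of the "formulate, search, execute" loop; the function and parameter names below are illustrative placeholders, not part of the lesson:

    # Sketch of the "formulate, search, execute" loop of a problem-solving agent.
    # formulate_problem, search, and execute stand in for any concrete problem
    # formulation, search algorithm, and actuator.
    def solve_and_act(current_situation, goal, formulate_problem, search, execute):
        problem = formulate_problem(current_situation, goal)   # 1. formulate
        solution = search(problem)                             # 2. search: a sequence of actions
        if solution is not None:
            for action in solution:                            # 3. execute
                execute(action)
        return solution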

Game Playing: 8 Queens Puzzle

  • The goal is to arrange eight queens on a chessboard such that no queen attacks any other.
  • Each state represents an arrangement of 0-8 queens.
  • The initial state involves no queens on the board.
  • Actions consist of adding a queen to any empty square.
  • The transition model returns the board with the new queen added to the specified square.
  • The goal test checks that eight queens are on the board and none attacks another (a minimal goal-test sketch follows this list).
  • Possible sequences to investigate are around 1.8 x 10^14 (64 * 63 * ... * 57).
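
As a concrete illustration of the goal test, here is a minimal Python sketch. It assumes a state is represented as a list of (row, column) pairs, which is one possible representation rather than the lesson's own:

    # Goal test sketch for the 8-queens puzzle, assuming a state is a list of
    # (row, column) pairs for the queens placed so far.
    def attacks(q1, q2):
        r1, c1 = q1
        r2, c2 = q2
        return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

    def goal_test(state):
        if len(state) != 8:
            return False
        return all(not attacks(state[i], state[j])
                   for i in range(len(state)) for j in range(i + 1, len(state)))

    # One known solution: queens in columns 0, 4, 7, 5, 2, 6, 1, 3 for rows 0-7.
    solution = [(row, col) for row, col in enumerate((0, 4, 7, 5, 2, 6, 1, 3))]
    print(goal_test(solution))  # -> True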

8 Puzzle Problem

  • Any state can be designated as the initial state.
  • Exactly half of the possible initial states can reach any given goal state.
  • The simplest formulation defines actions as movements of the blank space: Left, Right, Up, or Down.
  • Given a state and an action, the transition model returns the resulting state (a short sketch of the actions and transition model follows this list).
  • The goal test verifies the state matches the goal.
  • Each step has a cost of 1, with the path cost referring to the number of steps on the path.
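
A minimal Python sketch of this formulation, assuming a state is a tuple of nine entries read row by row with 0 marking the blank (an assumed representation chosen for illustration):

    # Sketch of the 8-puzzle formulation: actions move the blank space.
    MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

    def actions(state):
        row, col = divmod(state.index(0), 3)
        acts = []
        if row > 0: acts.append("Up")
        if row < 2: acts.append("Down")
        if col > 0: acts.append("Left")
        if col < 2: acts.append("Right")
        return acts

    def result(state, action):
        blank = state.index(0)
        target = blank + MOVES[action]
        board = list(state)
        board[blank], board[target] = board[target], board[blank]
        return tuple(board)

    start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
    print(actions(start))        # -> ['Up', 'Down', 'Left', 'Right']
    print(result(start, "Up"))   # -> (7, 0, 4, 5, 2, 6, 8, 3, 1)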

Travelling Problem

  • Initial state: the state the agent starts in, e.g. In(Arad).
  • ACTIONS(s): returns the set of actions applicable in state s; from In(Arad) these are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
  • Transition model: specified by a function RESULT(s, a) that returns the state that results from doing action a in state s.
  • Goal state: {In(Bucharest)}.
  • Path cost: a cost function that assigns a numeric cost to each path (a small sketch of this formulation follows this list).
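
The same formulation expressed as a small Python sketch; only a few roads are included, using the standard textbook distances for those particular roads:

    # Sketch of the route-finding formulation above.
    ROADS = {
        "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
        "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
        # ... remaining cities omitted
    }

    initial_state = "Arad"                 # In(Arad)

    def actions(state):                    # ACTIONS(s): Go(<neighbouring city>)
        return [("Go", city) for city in ROADS.get(state, {})]

    def result(state, action):             # RESULT(s, Go(city)) = In(city)
        return action[1]

    def goal_test(state):                  # goal: In(Bucharest)
        return state == "Bucharest"

    def step_cost(state, action):          # summed along a path to give the path cost
        return ROADS[state][action[1]]

    print(actions("Arad"))   # -> [('Go', 'Sibiu'), ('Go', 'Timisoara'), ('Go', 'Zerind')]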

Tree Terminology

  • Node: Denotes some state/value.
  • Root: The top node in a tree, the prime ancestor.
  • Child: A node directly connected to another node when moving away from the root, an immediate descendant.
  • Parent: The converse notion of a child, an immediate ancestor.
  • Leaf: A node with no children.
  • Internal node: A node with at least one child.
  • Edge: The connection between one node and another.
  • Depth: The distance between a node and the root.
  • Level: The number of edges between a node and the root + 1.
  • Height: The number of edges on the longest path between a node and a descendant leaf.
  • Breadth: The number of leaves.
  • Sub Tree: A tree T is a tree consisting of a node in T and all of its descendants in T.
  • Binary Tree: A tree data structure in which each node has at most two children, referred to as the left child and the right child.
  • Branching Factor: The maximum number of branches (children) for any node.
  • For breadth-first search, memory requirements are a bigger problem than execution time.
  • Exponential-complexity search problems cannot be solved by uninformed methods, except for the smallest instances.
  • Depth-first search requires storage of only O(bd) nodes.
  • At depth d = 16, depth-first search requires 156 kilobytes, 7 trillion times less space than the 10 exabytes required for breadth-first search.
  • A node is selected for expansion based on an evaluation function.
  • The evaluation function incorporates a heuristic function h(n).
  • h(n) is the estimated cost of the cheapest path from the state at node n to a goal state.
  • The straight-line-distance heuristic estimates the cost from Arad to Bucharest as the straight-line distance between the two cities.
  • Nodes are evaluated by combining g(n) and h(n).
  • g(n) is the cost to reach the node.
  • h(n) is the estimated cost to get from the node to the goal.
  • f(n) = g(n) + h(n)
  • f(n) is the estimated cost of the cheapest solution through n.
  • The node with the lowest g(n) + h(n) is tried first (a small A* sketch follows this list).
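
A minimal A* sketch in Python; the graph, edge costs, and heuristic values below are made-up illustration data, not taken from the lesson:

    # A* sketch: best-first search ordered by f(n) = g(n) + h(n).
    import heapq

    def a_star(graph, h, start, goal):
        frontier = [(h[start], 0, start, [start])]    # entries: (f, g, node, path)
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for neighbor, cost in graph[node]:
                new_g = g + cost
                if new_g < best_g.get(neighbor, float("inf")):
                    best_g[neighbor] = new_g
                    heapq.heappush(frontier,
                                   (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
        return None, float("inf")

    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
             "C": [("D", 1)], "D": []}
    h = {"A": 3, "B": 2, "C": 1, "D": 0}
    print(a_star(graph, h, "A", "D"))   # -> (['A', 'B', 'C', 'D'], 4)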

Types of Games

  • Deterministic games include chess, checkers, Go, and Othello.
  • Chance (stochastic) games include backgammon and Monopoly (a sketch of the expectiminimax idea for such games follows this list).
  • Imperfect information deterministic games are battleships and blind tic-tac-toe.
  • Imperfect information chance games are bridge, poker, scrabble, and nuclear war.
  • There's particular interest in deterministic, fully observable, zero-sum games, where two agents act.
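
Chance games are typically handled with expectiminimax, which the quiz above also mentions: chance nodes take the probability-weighted average of their children instead of a max or min. A minimal sketch on a hand-built tree (the tree and probabilities are invented for illustration):

    # Expectiminimax sketch: like minimax, but a chance node's value is the
    # probability-weighted average of its children.
    def expectiminimax(node):
        kind, payload = node
        if kind == "leaf":
            return payload
        if kind == "max":
            return max(expectiminimax(child) for child in payload)
        if kind == "min":
            return min(expectiminimax(child) for child in payload)
        if kind == "chance":
            return sum(p * expectiminimax(child) for p, child in payload)

    # A max node choosing between a fair coin flip worth 4 or 0, and a sure 1.
    tree = ("max", [("chance", [(0.5, ("leaf", 4)), (0.5, ("leaf", 0))]),
                    ("leaf", 1)])
    print(expectiminimax(tree))   # -> 2.0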

Zero-Sum Games

  • Adversarial: Pure competition
  • Agents have opposite values on the outcomes.
  • One agent maximizes a single value, and the other minimizes it.
  • Each move by one player is called a "ply".

Embedded Thinking

  • One agent tries to decide what to do, thinking about possible actions' consequences.
  • Agents must consider their opponent's actions as well.
  • Players imagine the opponent's response to their actions.
  • Embedded thinking is needed because there is an opponent whose moves prevent the agent from simply planning an optimal sequence of actions.
  • The optimal solution is a strategy, not a sequence of actions.
  • Strategy: If the opponent does a, the agent does b; otherwise, if the opponent does c, the agent does d.
  • Model games as search problems and use heuristic evaluation functions.

Single Player

  • Consider a tic-tac-toe position in which a single player (Max) has three possible moves.
  • Max chooses the path that leads to his win.
  • Another player would do everything to make Max lose.

Adversarial Search

  • The minimax involves two players: Max and Min.
  • The players take turns.
  • Max moves first.
  • Max maximizes results.
  • Min minimizes the results.
  • Each node's minimax value is the best achievable utility against an optimal adversary.
  • Minimax value: best achievable payoff against best play.

Formalization of Game Playing

  • Initial state defines the game starting point.
  • Player(s) designates which player has the move in state s.
  • Actions(s) defines legal moves in state s.
  • Transition function defines the result of each move (S*A->S)
  • Terminal test identifies when the game is over.
  • Terminal states indicate states where a game ends.
  • Utility(s, p): the utility function gives the numeric value of terminal state s for player p.
  • Chess uses values of +1, 0, and 1/2 for a win, loss, or draw, respectively.
  • Tic-tac-toe uses utility values of +1, -1, and 0 (a minimal formalization sketch follows this list).
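
To make the formalization concrete, here is a minimal Python sketch for a simple Nim variant (a game mentioned earlier in the lesson). The specific rules used here, remove 1 to 3 objects per move and the player who takes the last object wins, are an assumed variant chosen only for illustration:

    # Formalization sketch: a state is (objects_left, player_to_move).
    INITIAL_STATE = (7, "MAX")

    def player(state):                  # Player(s): whose turn it is
        return state[1]

    def actions(state):                 # Actions(s): legal moves in state s
        n, _ = state
        return [take for take in (1, 2, 3) if take <= n]

    def result(state, action):          # transition function (S x A -> S)
        n, who = state
        return (n - action, "MIN" if who == "MAX" else "MAX")

    def terminal_test(state):           # terminal test: is the game over?
        return state[0] == 0

    def utility(state, p):              # Utility(s, p): value of terminal state s for p
        winner = "MIN" if player(state) == "MAX" else "MAX"   # the last mover wins
        return +1 if winner == p else -1

    print(actions(INITIAL_STATE))       # -> [1, 2, 3]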

Adversarial Search: Minimax

  • The optimal strategy for Max can be computed with a depth-first search of the game tree.
  • A leaf (terminal) node can appear at any level of the tree.

Minimax Principle

  • Compute the utility of being in a state assuming that both players play optimally all the way until the end of the game.
  • Propagate minimax values up the tree as soon as terminal nodes are discovered.
  • For terminal nodes, value is utility(state).
  • If the state is MAX, node value will be the highest value of all of the children.
  • If that state is MIN, node value will be set to the lowest value for the children.

Minimax Algorithm

  • The algorithm is optimal and complete for finite trees.
  • With depth-first exploration, minimax takes O(b^d) time and O(bd) space, where b is the branching factor and d is the depth of the tree (a minimax sketch follows this list).
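
A minimal minimax sketch in Python over an explicit game tree; the example tree is a standard three-branch illustration with leaf utilities, not a position from a real game:

    # Minimax on an explicit game tree: internal nodes are lists of child nodes,
    # leaves are utility values from Max's point of view.
    def minimax(node, is_max):
        if isinstance(node, (int, float)):       # terminal node: its utility
            return node
        values = [minimax(child, not is_max) for child in node]
        return max(values) if is_max else min(values)

    # The Min nodes evaluate to 3, 2, and 2, so Max's best achievable value is 3.
    game_tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(minimax(game_tree, True))   # -> 3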

Tic-Tac-Toe

  • There are about five legal moves per position on average, and a game lasts at most nine moves (9 plies).
  • This gives on the order of 5^9 = 1,953,125 nodes, and 9! = 362,880 distinct ways of filling the board.

Chess

  • Chess has an average branching factor of about b = 35 and a typical game-tree depth of about d = 100, giving roughly b^d = 35^100, i.e. about 10^154 nodes.

Go

  • Branching factor starts at 361 (19X19 board).

Multiplayer Games

  • The game structure and approach differ from two-player games, with complexities arising from multiple decision-makers and alliances.

Case of Limited Resources

  • In real-time games, the tree cannot be searched all the way to its leaves.
  • In practice, minimax can only be run to a limited depth.
  • Pruning large chunks of the tree buys extra plies of search, and more plies make a big difference.
  • Solution: replace terminal utilities with an evaluation function eval(s) for non-terminal positions, and eliminate large parts of the tree via pruning (a small evaluation-function sketch follows this list).
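
A minimal sketch of a weighted-feature evaluation function; the feature names and weights below are invented for illustration and are not Deep Blue's actual features:

    # Weighted-feature evaluation function eval(s) for non-terminal positions.
    WEIGHTS = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}

    def evaluate(features):
        # features maps each feature name to (my score - opponent's score)
        return sum(WEIGHTS[name] * value for name, value in features.items())

    # Up one unit of material, slightly better mobility, equal king safety:
    print(evaluate({"material": 1, "mobility": 3, "king_safety": 0}))   # -> 1.3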

Alpha-Beta Pruning

  • Just like minimax, pruning performs a depth-first search.
  • Pruning requires keeping track of an alpha and beta.
  • Alpha is the largest value for Max across children, representing a lower bound on Max's outcome.
  • Beta is the lowest value for MIN across explored children, signifying an upper bound on MIN’s outcome.

Propagation and Pruning

  • Alpha and beta values are passed down the tree during the search.
  • Values are computed at terminal nodes and propagated back up.
  • Update alpha only at Max nodes and update beta only at Min nodes.
  • Prune the remaining branches at a node as soon as alpha >= beta (an alpha-beta sketch follows this list).
  • Apply domain knowledge to craft useful features.
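
A minimal alpha-beta sketch in Python, using the same kind of explicit game tree as the minimax sketch above; the tree is illustrative only:

    # Alpha-beta pruning: alpha is the best value found so far for Max, beta the
    # best (lowest) found so far for Min; prune when alpha >= beta.
    import math

    def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
        if isinstance(node, (int, float)):       # terminal node: its utility
            return node
        if is_max:
            value = -math.inf
            for child in node:
                value = max(value, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:                # remaining children cannot change the result
                    break
            return value
        else:
            value = math.inf
            for child in node:
                value = min(value, alphabeta(child, True, alpha, beta))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

    print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], True))   # -> 3, with branches pruned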
