UNIT 2
Problem Solving Methods – Search Strategies – Uninformed – Informed – Heuristics – Local Search Algorithms and Optimization Problems – Searching with Partial Observations – Constraint Satisfaction Problems – Constraint Propagation – Backtracking Search – Game Playing – Optimal Decisions in Games – Alpha-Beta Pruning – Stochastic Games

2.1 PROBLEM SOLVING BY SEARCH

An important aspect of intelligence is goal-based problem solving. The solution of many problems can be described by finding a sequence of actions that leads to a desirable goal. Each action changes the state, and the aim is to find the sequence of actions and states that leads from the initial (start) state to a final (goal) state.

A well-defined problem can be described by:
o Initial state
o Operator or successor function – for any state x, returns s(x), the set of states reachable from x with one action
o State space – all states reachable from the initial state by any sequence of actions
o Path – a sequence through the state space
o Path cost – a function that assigns a cost to a path; the cost of a path is the sum of the costs of the individual actions along the path
o Goal test – a test to determine whether a state is a goal state

What is Search?
Search is the systematic examination of states to find a path from the start (root) state to the goal state. The set of possible states, together with the operators defining their connectivity, constitutes the search space. The output of a search algorithm is a solution, that is, a path from the initial state to a state that satisfies the goal test.

Problem-solving agents
A problem-solving agent is a goal-based agent. It decides what to do by finding sequences of actions that lead to desirable states. The agent can adopt a goal and aim at satisfying it. To illustrate the agent's behaviour, consider an agent in the city of Arad, Romania, which has to adopt the goal of getting to Bucharest.

Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. The agent's task is to find out which sequence of actions will get it to a goal state. Problem formulation is the process of deciding what actions and states to consider, given a goal.

Example: route-finding problem
Referring to the Romania map figure: on holiday in Romania, currently in Arad; the flight leaves tomorrow from Bucharest.
Formulate goal: be in Bucharest.
Formulate problem: states are the various cities; actions are drives between cities.
Find solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.

Problem formulation
A problem is defined by four items:
o initial state, e.g., "at Arad"
o successor function S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, ...}
o goal test, which can be explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x)
o path cost (additive), e.g., the sum of distances, the number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be >= 0

A solution is a sequence of actions leading from the initial state to a goal state.
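The four items above map naturally onto code. The following minimal Python sketch expresses the Romania route-finding formulation; the class layout and the small road-map excerpt are illustrative assumptions, not a prescribed API.

# Sketch: problem formulation for the Romania route-finding example.
# Only a few cities of the map are included here for illustration.
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def successors(self, state):
        """S(x): the set of (action, state) pairs reachable in one step."""
        return [("Go(%s)" % city, city) for city in self.roads.get(state, {})]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, result):
        """c(x, a, y): here, the road distance between the two cities."""
        return self.roads[state][result]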
2.2 EXAMPLE PROBLEMS

The problem-solving approach has been applied to a vast array of task environments. Some of the best-known problems are summarized below. They are distinguished as toy or real-world problems. A toy problem is intended to illustrate various problem-solving methods; it can easily be used by different researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually care about.

2.3 TOY PROBLEMS

Vacuum World Example
o States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 x 2^2 = 8 possible world states.
o Initial state: Any state can be designated as the initial state.
o Successor function: This generates the legal states that result from trying the three actions (Left, Right, Suck). The complete state space is shown in Figure 2.1.
o Goal test: This checks whether all the squares are clean.
o Path cost: Each step costs one, so the path cost is the number of steps in the path.

Figure 2.1 The state space for the vacuum world. Arcs denote actions: L = Left, R = Right.

The 8-puzzle
An 8-puzzle consists of a 3 x 3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach the goal state shown in Figure 2.2.

Figure 2.2 A typical instance of the 8-puzzle.

The problem formulation is as follows:
o States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
o Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
o Successor function: This generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or Down).
o Goal test: This checks whether the state matches the goal configuration shown in the figure (other goal configurations are possible).
o Path cost: Each step costs 1, so the path cost is the number of steps in the path.

The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI. This general class is known to be NP-complete. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved. The 15-puzzle (4 x 4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few milliseconds by the best search algorithms. The 24-puzzle (on a 5 x 5 board) has around 10^25 states, and random instances are still quite difficult to solve optimally with current machines and algorithms.

8-Queens problem
The goal of the 8-queens problem is to place 8 queens on a chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column, or diagonal.) Figure 2.3 shows an attempted solution that fails: the queen in the rightmost column is attacked by the queen at the top left.

An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts.

Figure 2.3 The 8-queens problem.

The first incremental formulation one might try is the following:
o States: Any arrangement of 0 to 8 queens on the board is a state.
o Initial state: No queens on the board.
o Successor function: Add a queen to any empty square.
o Goal test: 8 queens are on the board, none attacked.
In this formulation, we have 64 x 63 x ... x 57 = 3 x 10^14 possible sequences to investigate. A better formulation would prohibit placing a queen in any square that is already attacked:
o States: Arrangements of n queens (0 <= n <= 8), one per column in the leftmost n columns, with no queen attacking another.
o Successor function: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.
This formulation reduces the 8-queens state space from 3 x 10^14 to just 2,057, and solutions are easy to find.
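The improved incremental formulation translates directly into a successor function that places queens column by column, on non-attacked squares only. A minimal Python sketch, assuming a state encoded as a tuple of row numbers (one entry per already-filled column); the function names are our own:

# Sketch: successor function for the improved 8-queens formulation.
def attacks(rows, new_row):
    """Would a queen in the next column, at new_row, conflict with any queen?"""
    col = len(rows)
    return any(r == new_row or abs(r - new_row) == abs(c - col)
               for c, r in enumerate(rows))

def successors(rows):
    """Add a queen to the leftmost empty column, on a non-attacked square."""
    return [rows + (row,) for row in range(8) if not attacks(rows, row)]

def goal_test(rows):
    return len(rows) == 8   # 8 queens placed; none attack, by construction

# Counting all reachable states reproduces the 2,057 figure:
# states, total = [()], 0
# while states:
#     s = states.pop(); total += 1; states += successors(s)
# total == 2057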
2.12 DEPTH-LIMITED SEARCH

The problem of unbounded trees in depth-first search can be alleviated by supplying a depth limit l. Depth-limited search is incomplete if l < d (the shallowest goal lies beyond the depth limit) and nonoptimal if l > d. Its time complexity is O(b^l) and its space complexity is O(bl). Depth-first search can be viewed as a special case of depth-limited search with l = infinity.

Sometimes, depth limits can be based on knowledge of the problem. For example, on the map of Romania there are 20 cities; therefore, we know that if there is a solution, it must be of length 19 at the longest, so l = 19 is a possible choice. However, it can be shown that any city can be reached from any other city in at most 9 steps. This number, known as the diameter of the state space, gives us a better depth limit.

Depth-limited search can be implemented as a simple modification to the general tree-search algorithm or to the recursive depth-first search algorithm; the pseudocode for recursive depth-limited search is shown in Figure 2.9. Note that the algorithm can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit. Depth-limited search = depth-first search with depth limit l; it returns cutoff if any path is cut off by the depth limit.

function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution, or failure/cutoff
  return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE[problem]), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns a solution, or failure/cutoff
  cutoff_occurred? ← false
  if GOAL-TEST(problem, STATE[node]) then return SOLUTION(node)
  else if DEPTH[node] = limit then return cutoff
  else for each successor in EXPAND(node, problem) do
    result ← RECURSIVE-DLS(successor, problem, limit)
    if result = cutoff then cutoff_occurred? ← true
    else if result ≠ failure then return result
  if cutoff_occurred? then return cutoff else return failure

Figure 2.9 Recursive implementation of depth-limited search.

2.13 ITERATIVE DEEPENING DEPTH-FIRST SEARCH

Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first search, that finds the best depth limit. It does this by gradually increasing the limit – first 0, then 1, then 2, and so on – until a goal is found. This occurs when the depth limit reaches d, the depth of the shallowest goal node. The algorithm is shown in Figure 2.10.

Iterative deepening combines the benefits of depth-first and breadth-first search. Like depth-first search, its memory requirements are modest: O(bd), to be precise. Like breadth-first search, it is complete when the branching factor is finite, and optimal when the path cost is a nondecreasing function of the depth of the node. Figure 2.11 shows the four iterations of ITERATIVE-DEEPENING-SEARCH on a binary search tree, where the solution is found on the fourth iteration.

Figure 2.10 The iterative deepening search algorithm, which repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found, or if the depth-limited search returns failure, meaning that no solution exists.
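The two algorithms above condense into a short executable sketch in Python. Here successors(state) is assumed to return (action, state) pairs, as in the earlier problem sketches, and the sentinel strings are our own convention:

# Sketch: recursive depth-limited search and iterative deepening.
CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(state, goal_test, successors, limit, path=None):
    path = path or [state]
    if goal_test(state):
        return path                      # solution: the list of states visited
    if limit == 0:
        return CUTOFF                    # out of depth; a goal may lie deeper
    cutoff_occurred = False
    for _action, child in successors(state):
        result = depth_limited_search(child, goal_test, successors,
                                      limit - 1, path + [child])
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return result
    return CUTOFF if cutoff_occurred else FAILURE

def iterative_deepening_search(state, goal_test, successors, max_depth=50):
    # Gradually increase the limit: 0, 1, 2, ... until a goal is found.
    for limit in range(max_depth + 1):
        result = depth_limited_search(state, goal_test, successors, limit)
        if result != CUTOFF:
            return result                # either a solution or definite failure
    return CUTOFF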
Figure 2.11 Four iterations of iterative deepening search on a binary tree.

Figure 2.12 Iterative deepening search is not as wasteful as it might seem.

Figure 2.13 Properties of iterative deepening search.

Bidirectional Search
The idea behind bidirectional search is to run two simultaneous searches – one forward from the initial state and the other backward from the goal – stopping when the two searches meet in the middle. The motivation is that b^(d/2) + b^(d/2) is much less than b^d; in the figure, the area of the two small circles is less than the area of one big circle centered on the start and reaching to the goal.

Figure 2.14 A schematic view of a bidirectional search that is about to succeed, when a branch from the start node meets a branch from the goal node.

Before moving into bidirectional search, let us first define a few terms.
Forward search: searching from the start towards the goal.
Backward search: searching from the goal backwards towards the start.
Bidirectional search, as the name suggests, is a combination of forward and backward search. If the average branching factor going out of a node (the fan-out) is smaller, prefer forward search; if the average branching factor going into a node (the fan-in) is smaller (i.e., the fan-out is larger), prefer backward search. We traverse the tree from the start node and from the goal node, and wherever they meet, the path from the start node to the goal through the intersection is the optimal solution. The algorithm is applicable when generating predecessors is easy in both the forward and backward directions and there exist only one or a few goal states. (A code sketch follows the comparison figures below.)

Figure 2.15 Comparing uninformed search strategies.

Figure 2.16 Evaluation of search strategies. b is the branching factor; d is the depth of the shallowest solution; m is the maximum depth of the search tree; l is the depth limit. Superscript caveats are as follows: (a) complete if b is finite; (b) complete if step costs >= e for positive e; (c) optimal if step costs are all identical; (d) if both directions use breadth-first search.
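For graphs where predecessors are as easy to generate as successors (e.g., an undirected road map), bidirectional search can be sketched as two alternating breadth-first frontiers. The adjacency-dict representation and helper names below are assumptions for illustration:

# Sketch: bidirectional breadth-first search on an undirected graph.
from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    # One frontier grows from each end; parent maps let us rebuild the path.
    frontiers = {start: deque([start]), goal: deque([goal])}
    parents = {start: {start: None}, goal: {goal: None}}
    while frontiers[start] and frontiers[goal]:
        for root in (start, goal):           # expand the two searches in turn
            node = frontiers[root].popleft()
            for nbr in graph[node]:
                if nbr in parents[root]:
                    continue
                parents[root][nbr] = node
                other = goal if root == start else start
                if nbr in parents[other]:    # the two searches have met
                    return _join(parents, start, goal, nbr)
                frontiers[root].append(nbr)
    return None                              # the frontiers never met

def _join(parents, start, goal, meet):
    half, node = [], meet
    while node is not None:                  # walk back to the start
        half.append(node)
        node = parents[start][node]
    path = half[::-1]
    node = parents[goal][meet]
    while node is not None:                  # walk forward to the goal
        path.append(node)
        node = parents[goal][node]
    return path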
2.14 SEARCHING WITH PARTIAL INFORMATION

Different types of incompleteness lead to three distinct problem types:
o Sensorless (conformant) problems: the agent has no sensors at all.
o Contingency problems: the environment is partially observable, or the actions are uncertain (adversarial).
o Exploration problems: the states and actions of the environment are unknown.

Sensorless vacuum world (no sensors):
o Initial belief state: {1,2,3,4,5,6,7,8}
o After action [Right], the belief state is {2,4,6,8}
o After action [Suck], the belief state is {4,8}
o After action [Left], the belief state is {3,7}
o After action [Suck], the belief state is {7}
o Answer: [Right, Suck, Left, Suck] coerces the world into state 7 without any sensors.
o Belief state: the set of states the agent believes it might be in.

Partial knowledge of states and actions:
– Sensorless or conformant problem: the agent may have no idea where it is; the solution (if any) is a sequence.
– Contingency problem: percepts provide new information about the current state; the solution is a tree or a policy; search and execution are often interleaved. If the uncertainty is caused by the actions of another agent, the problem is adversarial.
– Exploration problem: the states and actions of the environment are unknown.

Figure 2.17 and Figure 2.18 The vacuum world: states and actions.

Contingency example: start in {1,3}. By Murphy's law, Suck can dirty a clean carpet. Local sensing: dirt and location only.
– Percept = [L, Dirty], i.e., the belief state is {1,3}
– [Suck] gives {5,7}
– [Right] gives {6,8}
– [Suck] in state 6 gives {8} (success)
– BUT [Suck] in state 8 can fail
Solution?
– Belief-state problem: no fixed action sequence guarantees a solution.
Relax the requirement:
– [Suck, Right, if [R, Dirty] then Suck]
– Select actions based on the contingencies arising during execution.

Time and space complexity are always considered with respect to some measure of the problem difficulty. In theoretical computer science, the typical measure is the size of the state space. In AI, where the graph is represented implicitly by the initial state and successor function, complexity is expressed in terms of three quantities: b, the branching factor or maximum number of successors of any node; d, the depth of the shallowest goal node; and m, the maximum length of any path in the state space.
Search cost typically depends on the time complexity, but can also include a term for memory usage. Total cost combines the search cost and the path cost of the solution found.

2.15 INFORMED SEARCH AND EXPLORATION

Informed (Heuristic) Search Strategies
An informed search strategy is one that uses problem-specific knowledge beyond the definition of the problem itself. It can find solutions more efficiently than an uninformed strategy.

Best-first search
Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function f(n). The node with the lowest evaluation is selected for expansion, because the evaluation measures the distance to the goal. This can be implemented using a priority queue, a data structure that maintains the fringe in ascending order of f-values.

2.16 HEURISTIC FUNCTIONS

A heuristic function, or simply a heuristic, is a function that ranks the alternatives at each branching step of a search algorithm, based on the available information, in order to decide which branch to follow. The key component of a best-first search algorithm is a heuristic function, denoted h(n):
h(n) = estimated cost of the cheapest path from node n to a goal node.
For example, in Romania, one might estimate the cost of the cheapest path from Arad to Bucharest by the straight-line distance from Arad to Bucharest (Figure 2.19). Heuristic functions are the most common form in which additional knowledge is imparted to the search algorithm.

Greedy best-first search
Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. It evaluates nodes using just the heuristic function: f(n) = h(n). Taking the example of route finding in Romania, the goal is to reach Bucharest starting from the city of Arad. We need to know the straight-line distances to Bucharest from the various cities, as shown in Figure 2.19. For example, the initial state is In(Arad), and the straight-line-distance heuristic hSLD(In(Arad)) is 366. Using hSLD, the goal state can be reached faster.

Figure 2.19 Values of hSLD – straight-line distances to Bucharest.

Figure 2.20 Progress of greedy best-first search.

Figure 2.20 shows the progress of a greedy best-first search using hSLD to find a path from Arad to Bucharest. The first node to be expanded from Arad will be Sibiu, because it is closer to Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras, because it is closest. Fagaras in turn generates Bucharest, which is the goal.
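The greedy strategy just described is a few lines of Python with heapq as the priority queue. The H table below holds hSLD values in the spirit of Figure 2.19 (only some cities shown), and ROADS is the road-map excerpt from the earlier sketch; both are illustrative assumptions:

# Sketch: greedy best-first search, f(n) = h(n), with a priority queue.
import heapq

H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Zerind": 374,
     "Timisoara": 329, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def greedy_best_first(roads, h, start, goal):
    fringe = [(h[start], start, [start])]    # ordered by h-value only
    visited = set()
    while fringe:
        _, city, path = heapq.heappop(fringe)
        if city == goal:
            return path
        if city in visited:
            continue                          # repeated-state checking
        visited.add(city)
        for nbr in roads.get(city, {}):
            heapq.heappush(fringe, (h[nbr], nbr, path + [nbr]))
    return None

# greedy_best_first(ROADS, H, "Arad", "Bucharest")
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], matching Figure 2.20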
Properties of greedy search
o Complete: No – it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ... It is complete in a finite space with repeated-state checking.
o Time: O(b^m), but a good heuristic can give dramatic improvement.
o Space: O(b^m) – it keeps all nodes in memory.
o Optimal: No.
Greedy best-first search is not optimal, and it is incomplete. The worst-case time and space complexity is O(b^m), where m is the maximum depth of the search space.

A* SEARCH
A* search is the most widely used form of best-first search. The evaluation function f(n) is obtained by combining (1) g(n), the cost to reach the node, and (2) h(n), the estimated cost to get from the node to the goal:
f(n) = g(n) + h(n)
A* search is both complete and optimal. A* is optimal if h(n) is an admissible heuristic – that is, provided that h(n) never overestimates the cost to reach the goal. An obvious example of an admissible heuristic is the straight-line distance hSLD that we used in getting to Bucharest: by its nature, it cannot be an overestimate. The progress of an A* tree search for Bucharest is shown in Figure 2.22. The values of g are computed from the step costs shown on the Romania map, and the values of hSLD are given in Figure 2.19.

Figure 2.21 A* search.

Figure 2.22 Example of A* search.
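A* differs from the greedy sketch above only in the ordering key: the fringe is sorted by f(n) = g(n) + h(n) rather than by h(n) alone. A minimal Python sketch, reusing the assumed ROADS and H tables from earlier:

# Sketch: A* search, f(n) = g(n) + h(n).
import heapq

def a_star(roads, h, start, goal):
    # Each entry: (f, g, state, path); f orders the priority queue.
    fringe = [(h[start], 0, start, [start])]
    best_g = {}                               # cheapest known cost to each city
    while fringe:
        f, g, city, path = heapq.heappop(fringe)
        if city == goal:
            return path, g
        if city in best_g and best_g[city] <= g:
            continue                          # already reached more cheaply
        best_g[city] = g
        for nbr, step in roads.get(city, {}).items():
            g2 = g + step
            heapq.heappush(fringe, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# With an admissible h such as straight-line distance, the first time the
# goal is popped its path is optimal: Arad -> Sibiu -> Rimnicu Vilcea ->
# Pitesti -> Bucharest, at cost 418.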
2.17 LOCAL SEARCH ALGORITHMS AND OPTIMIZATION PROBLEMS

o In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
o For example, in the 8-queens problem, what matters is the final configuration of queens, not the order in which they are added.
o In such cases, we can use local search algorithms. They operate using a single current state (rather than multiple paths) and generally move only to neighbors of that state.
o Important applications of this class of problems include (a) integrated-circuit design, (b) factory-floor layout, (c) job-shop scheduling, (d) automatic programming, (e) telecommunications network optimization, (f) vehicle routing, and (g) portfolio management.

Key advantages of local search algorithms: (1) they use very little memory – usually a constant amount; and (2) they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.

2.18 OPTIMIZATION PROBLEMS

In addition to finding goals, local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.

State space landscape
Local search is best explained using a state space landscape, as shown in Figure 2.23. A landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function). If elevation corresponds to cost, then the aim is to find the lowest valley – a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak – a global maximum. Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum. The various topographic features are defined in the text.

Figure 2.23 A one-dimensional state space landscape in which elevation corresponds to the objective function. The aim is to find the global maximum. Hill-climbing search modifies the current state to try to improve it, as shown by the arrow.

Hill-climbing search
The hill-climbing search algorithm, shown in Figure 2.24, is simply a loop that continually moves in the direction of increasing value – that is, uphill. It terminates when it reaches a "peak" where no neighbor has a higher value.

function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node; neighbor, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor

Figure 2.24 The hill-climbing search algorithm (steepest-ascent version), the most basic local search technique. At each step the current node is replaced by the best neighbor, that is, the neighbor with the highest VALUE. If a heuristic cost estimate h is used instead, we would find the neighbor with the lowest h.

Hill-climbing is sometimes called greedy local search because it grabs a good neighbor state without thinking ahead about where to go next. Greedy algorithms often perform quite well.

Problems with hill-climbing
Hill-climbing often gets stuck for the following reasons:
o Local maxima: a local maximum is a peak that is higher than each of its neighboring states, but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be drawn upwards towards the peak, but will then be stuck with nowhere else to go.
o Ridges: a ridge is shown in Figure 2.25. Ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.
o Plateaux: a plateau is an area of the state space landscape where the evaluation function is flat. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which it is possible to make progress.

Figure 2.25 Illustration of why ridges cause difficulties for hill-climbing. The grid of states (dark circles) is superimposed on a ridge rising from left to right, creating a sequence of local maxima that are not directly connected to each other. From each local maximum, all the available options point downhill.

Hill-climbing variations
o Stochastic hill-climbing: random selection among the uphill moves; the selection probability can vary with the steepness of the uphill move.
o First-choice hill-climbing: implements stochastic hill-climbing by generating successors randomly until one better than the current state is found.
o Random-restart hill-climbing: tries to avoid getting stuck in local maxima by restarting from randomly generated initial states.

Simulated annealing search
A hill-climbing algorithm that never makes "downhill" moves towards states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk – moving to a successor chosen uniformly at random from the set of successors – is complete, but extremely inefficient. Simulated annealing combines hill-climbing with a random walk in a way that yields both efficiency and completeness. Figure 2.26 shows the simulated annealing algorithm. It is quite similar to hill-climbing, except that instead of picking the best move, it picks a random move. If the move improves the situation, it is always accepted; otherwise, the algorithm accepts the move with some probability less than 1. The probability decreases exponentially with the "badness" of the move – the amount ΔE by which the evaluation is worsened.
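The acceptance rule just described can be made concrete: a worse move (ΔE < 0) is accepted with probability e^(ΔE/T), where the "temperature" T falls over time. In this Python sketch, the geometric cooling schedule and the parameter values are illustrative assumptions, since the text does not fix a schedule:

# Sketch: simulated annealing with a simple geometric cooling schedule.
import math
import random

def simulated_annealing(current, value, random_successor,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    t = t0
    while t > t_min:
        nxt = random_successor(current)       # pick a random move, not the best
        delta_e = value(nxt) - value(current)
        if delta_e > 0:
            current = nxt                     # uphill moves are always accepted
        elif random.random() < math.exp(delta_e / t):
            current = nxt                     # downhill, with probability e^(dE/T)
        t *= cooling                          # lower the temperature
    return current

# e.g. maximizing f(x) = -(x - 3)^2 over the integers by +/-1 steps:
# simulated_annealing(0, lambda x: -(x - 3) ** 2,
#                     lambda x: x + random.choice([-1, 1]))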
Simulated annealing was first used extensively to solve VLSI layout problems in the early 1980s. It has been applied widely to factory scheduling and other large-scale optimization tasks.

Figure 2.26 The simulated annealing search algorithm, a version of stochastic hill-climbing where some downhill moves are allowed.

Genetic algorithms
A genetic algorithm (or GA) is a variant of stochastic beam search in which successor states are generated by combining two parent states, rather than by modifying a single state. Like beam search, GAs begin with a set of k randomly generated states, called the population. Each state, or individual, is represented as a string over a finite alphabet – most commonly, a string of 0s and 1s. For example, an 8-queens state must specify the positions of 8 queens, each in a column of 8 squares, and so requires 8 x log2 8 = 24 bits.

Figure 2.27 Genetic algorithm.

Figure 2.27 shows a population of four 8-digit strings representing 8-queens states. The production of the next generation of states is shown in the figure: in (b), each state is rated by the evaluation function, or fitness function; in (c), two pairs are selected at random for reproduction, in accordance with the probabilities in (b). The following algorithm implements all these steps.

function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals
          FITNESS-FN, a function that measures the fitness of an individual
  repeat
    new_population ← empty set
    for i from 1 to SIZE(population) do
      x ← RANDOM-SELECTION(population, FITNESS-FN)
      y ← RANDOM-SELECTION(population, FITNESS-FN)
      child ← REPRODUCE(x, y)
      if (small random probability) then child ← MUTATE(child)
      add child to new_population
    population ← new_population
  until some individual is fit enough, or enough time has elapsed
  return the best individual in population, according to FITNESS-FN

Figure 2.28 A genetic algorithm.

2.19 CONSTRAINT SATISFACTION PROBLEMS (CSP)

A constraint satisfaction problem (or CSP) is defined by a set of variables, X1, X2, ..., Xn, and a set of constraints, C1, C2, ..., Cm. Each variable Xi has a nonempty domain Di of possible values. Each constraint Ci involves some subset of the variables and specifies the allowable combinations of values for that subset. A state of the problem is defined by an assignment of values to some or all of the variables, {Xi = vi, Xj = vj, ...}. An assignment that does not violate any constraints is called a consistent or legal assignment. A complete assignment is one in which every variable is mentioned, and a solution to a CSP is a complete assignment that satisfies all the constraints. Some CSPs also require a solution that maximizes an objective function.

Example: map coloring
Figure 2.29 shows the map of Australia with its states and territories. We are given the task of coloring each region either red, green, or blue in such a way that no neighboring regions have the same color. To formulate this as a CSP, we define the variables to be the regions: WA, NT, Q, NSW, V, SA, and T. The domain of each variable is the set {red, green, blue}. The constraints require neighboring regions to have distinct colors; for example, the allowable combinations for WA and NT are the pairs {(red,green), (red,blue), (green,red), (green,blue), (blue,red), (blue,green)}. (The constraint can also be represented more succinctly as the inequality WA ≠ NT, provided the constraint satisfaction algorithm has some way to evaluate such expressions.)
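This CSP can be written down directly as data. The following Python sketch encodes the variables, domains, and neighbor constraints of the Australia map and checks an assignment for consistency; the helper names are our own:

# Sketch: the Australia map-coloring CSP as data, plus a consistency check.
VARIABLES = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
DOMAINS = {v: {"red", "green", "blue"} for v in VARIABLES}
# Binary constraints: each pair of neighboring regions must differ.
NEIGHBORS = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
             ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
             ("NSW", "V")]                    # T (Tasmania) has no neighbors

def consistent(assignment):
    """True if no constraint among the assigned variables is violated."""
    return all(assignment[a] != assignment[b]
               for a, b in NEIGHBORS
               if a in assignment and b in assignment)

# A complete, consistent assignment is a solution:
# consistent({"WA": "red", "NT": "green", "Q": "red", "NSW": "green",
#             "V": "red", "SA": "blue", "T": "red"})  -> True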
There are many possible solutions, such as {WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}. It is helpful to visualize a CSP as a constraint graph, as shown in Figure 2.30: the nodes of the graph correspond to the variables of the problem, and the arcs correspond to constraints.

Figure 2.29 Principal states and territories of Australia. Coloring this map can be viewed as a constraint satisfaction problem. The goal is to assign colors to each region so that no neighboring regions have the same color.

Figure 2.30 The map-coloring problem represented as a constraint graph.

CSP can be viewed as a standard search problem as follows:
o Initial state: the empty assignment {}, in which all variables are unassigned.
o Successor function: a value can be assigned to any unassigned variable, provided that it does not conflict with previously assigned variables.
o Goal test: the current assignment is complete.
o Path cost: a constant cost (e.g., 1) for every step.
Every solution must be a complete assignment and therefore appears at depth n if there are n variables. Depth-first search algorithms are popular for CSPs.

Varieties of CSPs
(i) Discrete variables
Finite domains: The simplest kind of CSP involves variables that are discrete and have finite domains. Map-coloring problems are of this kind. The 8-queens problem can also be viewed as a finite-domain CSP, where the variables Q1, Q2, ..., Q8 are the positions of the queens in columns 1, ..., 8, and each variable has the domain {1,2,3,4,5,6,7,8}. If the maximum domain size of any variable in a CSP is d, then the number of possible complete assignments is O(d^n) – that is, exponential in the number of variables. Finite-domain CSPs include Boolean CSPs, whose variables can be either true or false.
Infinite domains: Discrete variables can also have infinite domains – for example, the set of integers or the set of strings. With infinite domains, it is no longer possible to describe constraints by enumerating all allowed combinations of values. Instead, a constraint language of algebraic inequalities, such as StartJob1 + 5 <= StartJob3, must be used.

ALPHA-BETA PRUNING
In alpha-beta pruning, alpha is the best (highest) value found so far for MAX along the current path, and beta is the best (lowest) value found so far for MIN; a branch is pruned when alpha >= beta. Each node has to keep track of its alpha and beta values. Alpha can be updated only when it is MAX's turn and, similarly, beta can be updated only when it is MIN's turn: MAX updates only alpha values and the MIN player updates only beta values. When backing up the tree, node values (not alpha and beta values) are passed to parent nodes; alpha and beta values are passed only down to child nodes.

Working of alpha-beta pruning
1. We first start with the initial move, defining the alpha and beta values as the worst case, i.e., α = -∞ and β = +∞. We prune a node only when alpha becomes greater than or equal to beta.

Figure 2.46 Step 1 of alpha-beta pruning.

2. Since the initial value of alpha is less than beta, we do not prune. Now it is MAX's turn, so the value of alpha is calculated at node D: alpha at node D is max(2, 3) = 3.
3. The next move is at node B, and it is MIN's turn. So at node B, the value of beta is min(3, +∞) = 3; thus at node B, alpha = -∞ and beta = 3.

Figure 2.47 Step 2 of alpha-beta pruning.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down to it.
4. Now it is MAX's turn, so at node E we look for the maximum. The current value of alpha at E is -∞, and it is compared with 5: max(-∞, 5) = 5. So at node E, alpha = 5 and beta = 3.
5. Now alpha is greater than beta, which satisfies the pruning condition, so we can prune the right successor of node E: it will not be traversed, and the value at node E is 5.

Figure 2.48 Step 3 of alpha-beta pruning.

6. In the next step the algorithm comes back to node A from node B. At node A, alpha is changed to the maximum available value: max(-∞, 3) = 3. So the values of alpha and beta at node A are (3, +∞), and they are passed down to node C. These same values are passed on to node F.
7. At node F, the value of alpha is compared with the left child, which is 0: max(0, 3) = 3. It is then compared with the right child, which is 1: max(3, 1) = 3. Alpha remains 3, but the node value of F becomes 1.

Figure 2.49 Step 4 of alpha-beta pruning.

8. Node F returns its node value 1 to C, where it is compared with the beta value. It is MIN's turn, so min(+∞, 1) = 1. Now at node C, alpha = 3 and beta = 1, and alpha is greater than beta, which again satisfies the pruning condition. So the next successor of node C, namely G, is pruned, and the algorithm does not compute the entire subtree under G.

Figure 2.50 Step 5 of alpha-beta pruning.

Finally, C returns its node value 1 to A, and the best value at A is max(1, 3) = 3.

Figure 2.51 Step 6 of alpha-beta pruning.

The final tree shows which nodes were computed and which were pruned. For this example, the optimal value for the maximizer is 3.
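The walkthrough can be reproduced in a few lines of Python. This sketch implements minimax with alpha-beta pruning over a game tree given as nested lists; the leaf values on the pruned branches (E's second leaf and both of G's) were never computed in the example, so the 9, 7, and 5 below are arbitrary placeholders:

# Sketch: minimax with alpha-beta pruning over a nested-list game tree.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):            # a leaf: return its value
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        value = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)          # MAX updates only alpha
        else:
            best = min(best, value)
            beta = min(beta, best)            # MIN updates only beta
        if alpha >= beta:
            break                             # prune the remaining children
    return best

# The example tree: A = max(B, C), B = min(D, E), C = min(F, G);
# pruned leaves hold placeholder values.
tree = [[[2, 3], [5, 9]],                     # B: D = (2, 3), E = (5, pruned)
        [[0, 1], [7, 5]]]                     # C: F = (0, 1), G = pruned
print(alphabeta(tree, True))                  # -> 3, as in the walkthrough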
