Unit_2_Problems,State_Space_Search_&_Heuristic_Search.pptx

Full Transcript


Department of Computer Engineering (3170716 – Artificial Intelligence) (Unit 2) Problems, State Space Search & Heuristic Search

Unit-2 State Space Search
Outline: Defining the Problems. Problem Solving. Problem Characteristics. Production System. Issues in Design. Uninformed Search. Uniform Cost Search. Problem Reduction. Constraint Satisfaction. Means-End Analysis.

Unit-2 State Space Search
A state space represents the set of all possible states that a system or problem can exhibit. It encompasses the initial state, the goal state(s), and all the intermediate states that can be reached by applying actions or transformations. Think of it as a vast landscape of possibilities, where each point represents a unique configuration of the problem at hand. Search algorithms are systematic procedures that traverse the state space, evaluating different states and making informed decisions about which paths to explore further. In short, state spaces and search algorithms provide a systematic approach to exploring and solving complex problems: by representing the problem domain as a state space and employing a search algorithm, we can navigate the possibilities and find optimal or satisfactory solutions.

Unit-2 State Space Search
Building a system to solve a problem requires the following steps:
- Define the problem precisely, including detailed specifications and what constitutes an acceptable solution.
- Analyze the problem thoroughly, since some features may have a dominant effect on the chosen method of solution.
- Isolate and represent the background knowledge needed in the solution of the problem.
- Choose the best problem-solving technique and apply it to the problem.

Unit-2 State Space Search
Problem = searching for the goal state. A state represents the status of the solution at a given step of the problem-solving procedure; the solution of a problem is thus a sequence of problem states. The problem-solving procedure applies an operator to a state to get the next state, then applies another operator to the resulting state to derive a new state. The process of applying an operator to a state and transitioning to the next state continues until the goal (desired) state is derived. Such a method of solving a problem is generally referred to as the state space approach.

Unit-2 State Space Search
- Define a state space that contains all possible configurations of the relevant objects, without enumerating all the states in it. A state space represents a problem in terms of states and operators that change states.
- Define some of these states as possible initial states.
- Specify one or more states as acceptable solutions; these are the goal states.
- Specify a set of rules as the possible actions allowed. This involves thinking about the generality of the rules, the assumptions made in the informal presentation, and how much work can be anticipated by inclusion in the rules.

Unit-2 Problem Formulation
A single-state problem formulation consists of four components: the initial state, the successor function, the goal test, and the path cost. Problem formulation means choosing a relevant set of states to consider and a feasible set of operators for moving from one state to another. Search is the process of imagining sequences of operators applied to the initial state and checking which sequence reaches a goal state.
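As a rough illustration of the four components just listed, they can be written down as a small Python interface; the class and method names here are illustrative assumptions, not taken from the slides.

```python
# Minimal sketch of the four-component problem formulation described above.
# The names (SearchProblem, successors, goal_test, path_cost) are illustrative.

class SearchProblem:
    """A problem is defined by an initial state, a successor function,
    a goal test, and a path-cost function."""

    def __init__(self, initial_state):
        self.initial_state = initial_state

    def successors(self, state):
        """Yield (action, next_state) pairs reachable from `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` is a goal state."""
        raise NotImplementedError

    def path_cost(self, cost_so_far, state, action, next_state):
        """Cost of a path reaching `next_state`; default: one unit per step."""
        return cost_so_far + 1
```

A concrete problem (such as the water jug problem below) supplies the successor function and goal test, and a search algorithm only interacts with this interface.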
Unit-2 State Space Search
Water Jug Problem: There are two jugs, X and Y; X holds a maximum of four gallons and Y a maximum of three gallons. How can we get exactly 2 gallons into the four-gallon jug X? The state space is the set of ordered pairs giving the number of gallons in the two jugs at any time, i.e. (X, Y) where X = 0, 1, 2, 3, 4 and Y = 0, 1, 2, 3. The start state is (0, 0) and the goal state is (2, n), where n is a "don't care" value that may be anything from 0 to 3 gallons.

Unit-2 State Space Search
Production rules for the Water Jug Problem

Unit-2 State Space Search
Solution to the Water Jug Problem

Unit-2 State Space Search
Requirements of a good search strategy:
1. It causes motion; otherwise, it will never lead to a solution.
2. It is systematic; otherwise, it may use more steps than necessary.
3. It is efficient; it should find a good, though not necessarily the best, answer.

Unit-2 State Space Search
Search Algorithm

Unit-2 State Space Search
| Informed Search | Uninformed Search |
|---|---|
| It uses knowledge during the searching process. | It does not use knowledge during the searching process. |
| It finds a solution more quickly. | It finds a solution slowly compared to informed search. |
| It may or may not be complete. | It is always complete. |
| Cost is low. | Cost is high. |
| It consumes less time. | It consumes moderate time. |
| It provides direction regarding the solution. | No suggestion regarding the solution is given. |
| It is less lengthy to implement. | It is more lengthy to implement. |
| Examples: Greedy Search, A* Search, Graph Search. | Examples: Depth First Search, Breadth First Search. |

Unit-2 State Space Search
Breadth First Search: It starts from the root node, explores the neighboring nodes first, and then moves towards the next-level neighbors. It generates one tree at a time until the solution is found. It can be implemented using a FIFO queue data structure. This method provides the shortest path to the solution. If the branching factor (the average number of child nodes for a given node) is b and the depth is d, then the number of nodes at level d is b^d.

Unit-2 State Space Search
Breadth First Search

Unit-2 State Space Search
BFS tree for the Water Jug Problem

Unit-2 State Space Search
Depth First Search: The DFS algorithm is a recursive algorithm that uses the idea of backtracking. It involves an exhaustive search of all the nodes by going ahead if possible, else by backtracking. Here, "backtrack" means that when you are moving forward and there are no more nodes along the current path, you move backwards along the same path to find nodes left to traverse. All the nodes on the current path are visited until all the unvisited nodes have been traversed, after which the next path is selected. This recursive nature of DFS can be implemented using a stack.

Unit-2 State Space Search
Depth First Search

Unit-2 State Space Search
| Criterion | Breadth-First | Depth-First |
|---|---|---|
| Time | b^d | b^m |
| Space | b^d | bm |
| Optimal? | Yes | No |
| Complete? | Yes | No |
(b: branching factor, d: solution depth, m: maximum depth)
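Tying the water jug formulation to the BFS strategy above, here is a rough sketch that applies the usual fill/empty/pour production rules with a FIFO queue; the exact rule table on the slide may differ slightly, so treat the rule set as an assumption.

```python
from collections import deque

def water_jug_bfs(goal_x=2, cap_x=4, cap_y=3):
    """Breadth-first search over (x, y) jug states, starting from (0, 0).
    Returns the states on a shortest path to any state with `goal_x`
    gallons in the 4-gallon jug."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal_x:                           # goal test: (2, n), n is a don't-care
            path, state = [], (x, y)
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        pour_xy = min(x, cap_y - y)               # amount poured from X into Y
        pour_yx = min(y, cap_x - x)               # amount poured from Y into X
        for nxt in [(cap_x, y), (x, cap_y),       # fill X, fill Y
                    (0, y), (x, 0),               # empty X, empty Y
                    (x - pour_xy, y + pour_xy),   # pour X -> Y
                    (x + pour_yx, y - pour_yx)]:  # pour Y -> X
            if nxt not in parent:
                parent[nxt] = (x, y)
                queue.append(nxt)
    return None

print(water_jug_bfs())
# [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
```

Because BFS explores level by level, the returned path uses the fewest operator applications, which is exactly the "shortest path" property claimed for BFS above.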
Unit-2 State Space Search
Uniform Cost Search: Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost; it expands nodes according to their path costs from the root node. It can be used to solve any graph or tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented with a priority queue that gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

Unit-2 State Space Search
Algorithm for uniform cost search (sketched in code below):
1. Insert the root node into the priority queue.
2. Repeat while the queue is not empty:
   - Remove the element with the highest priority (the lowest cumulative cost).
   - If the removed node is the destination, print the total cost and stop the algorithm.
   - Else, enqueue all the children of the current node into the priority queue, with their cumulative cost from the root as the priority.
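The following is a compact sketch of the uniform-cost loop above, using Python's heapq module as the priority queue; the small weighted graph in the usage example is made up purely for illustration.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, edge_cost).
    Returns (total_cost, path) for the cheapest path, or None if none exists."""
    frontier = [(0, start, [start])]             # priority = cumulative path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # lowest cumulative cost first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical weighted graph, for illustration only.
g = {'S': [('A', 1), ('B', 4)], 'A': [('C', 2)], 'B': [('C', 1)], 'C': [('G', 3)]}
print(uniform_cost_search(g, 'S', 'G'))          # (6, ['S', 'A', 'C', 'G'])
```

If every edge cost is set to 1 the pop order degenerates to plain BFS, which matches the equivalence noted above.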
Unit-2 State Space Search
To choose an appropriate method for a particular problem, ask:
- Is the problem decomposable?
- Can solution steps be ignored or undone?
- Is the universe predictable?
- Is a good solution absolute or relative?
- Is the solution a state or a path?
- What is the role of knowledge?
- Does the task require human interaction?

Unit-2 State Space Search
Is the problem decomposable? Can the problem be broken down into smaller problems to be solved independently? A decomposable problem can be solved more easily.

Unit-2 State Space Search
Can solution steps be ignored or undone?
Theorem proving: a lemma that has been proved can be ignored in later steps. Ignorable!
The 8-Puzzle: from a start board such as (2 8 3 / 1 6 4 / 7 _ 5) to the goal board (1 2 3 / 8 _ 4 / 7 6 5), moves can be undone and backtracked. Recoverable!

Unit-2 State Space Search
Playing chess: moves cannot be retracted. Irrecoverable!
- Ignorable problems: solution steps can be ignored, e.g. theorem proving.
- Recoverable problems: solution steps can be undone, e.g. the 8-puzzle.
- Irrecoverable problems: solution steps cannot be undone, e.g. chess.

Unit-2 State Space Search
Is the universe predictable?
The 8-Puzzle: every time we make a move, we know exactly what will happen. Certain outcome!
Playing bridge: we cannot know exactly where all the cards are or what the other players will do on their turns. Uncertain outcome!
For certain-outcome problems, planning can be used to generate a sequence of operators that is guaranteed to lead to a solution. For uncertain-outcome problems, a sequence of generated operators can only have a good probability of leading to a solution; plan revision is made as the plan is carried out and the necessary feedback is provided.

Unit-2 State Space Search
Is a good solution absolute or relative? Is Marcus alive? Simple facts are defined in the database:
1. Marcus was a man.
2. Marcus was a Pompeian.
3. Marcus was born in 40 A.D.
4. All men are mortal.
5. All Pompeians died when the volcano erupted in 79 A.D.
6. No mortal lives longer than 150 years.
7. It is now 1991 A.D.
One reasoning path (Solution 1): Marcus was a man (1) and all men are mortal (4), so Marcus is mortal (8). Marcus was born in 40 A.D. (3) and it is now 1991 A.D. (7), so Marcus' age is 1951 years (9). No mortal lives longer than 150 years (6), so Marcus is dead (10). Different reasoning paths lead to the same answer; it does not matter which path we follow.

Unit-2 State Space Search
Is the solution a state or a path?
Finding a consistent interpretation of "The bank president ate a dish of pasta salad with the fork":
- Does "bank" refer to a financial institution or to the side of a river?
- Was the "dish" or the "pasta salad" eaten?
- Does "pasta salad" contain pasta, in the way that "dog food" does not contain dog?
- Which part of the sentence does "with the fork" modify? What if it were "with vegetables"?
No record of the processing is necessary; only the final interpretation (a state) matters.

Unit-2 State Space Search
The Water Jug Problem: the path that leads to the goal must be reported.

Unit-2 State Space Search
Is the solution a state or a path? A path-solution problem can be reformulated as a state-solution problem by describing a state as a partial path to a solution. The question is whether that reformulation is natural or not.

Unit-2 State Space Search
Does the task require human interaction?
- Solitary problems, in which there is no intermediate communication and no demand for an explanation of the reasoning process.
- Conversational problems, in which intermediate communication provides either additional assistance to the computer or additional information to the user.

Unit-2 State Space Search
Problem Classification: There is a variety of problem-solving methods, but there is no single way of solving all problems. Not all new problems should be treated as totally new; solutions of similar problems can be exploited.

Unit-2 State Space Search
Generate-and-test algorithm:
1. Generate a possible solution.
2. Test to see if it is actually a solution.
3. Quit if a solution has been found; otherwise, return to step 1.
This is acceptable for simple problems but inefficient for problems with a large search space.

Unit-2 State Space Search
Variants of generate-and-test (the exhaustive variant is sketched in code below):
- Exhaustive generate-and-test.
- Heuristic generate-and-test: do not consider paths that seem unlikely to lead to a solution.
- Plan generate-and-test: create a list of candidates, then apply generate-and-test to that list.
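Here is a rough sketch of the plain exhaustive variant; the candidate generator and the cost table in the usage example are invented for illustration only.

```python
import itertools

def generate_and_test(candidates, is_solution):
    """Plain exhaustive generate-and-test: return the first candidate
    that passes the test, or None if the space is exhausted."""
    for candidate in candidates:
        if is_solution(candidate):
            return candidate
    return None

# Toy usage: find an ordering of three tasks whose (made-up) cost is at most 5.
cost = {('a', 'b', 'c'): 9, ('a', 'c', 'b'): 7, ('b', 'a', 'c'): 5,
        ('b', 'c', 'a'): 8, ('c', 'a', 'b'): 6, ('c', 'b', 'a'): 10}
print(generate_and_test(itertools.permutations('abc'), lambda p: cost[p] <= 5))
# ('b', 'a', 'c')
```

The heuristic and plan variants differ only in how the candidate stream is produced: instead of enumerating everything, a heuristic or a planner prunes unlikely candidates before the test step.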
Unit-2 Hill Climbing
Hill climbing is a heuristic search used for mathematical optimization problems in the field of Artificial Intelligence. Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem; this solution may not be the global maximum. Here, "mathematical optimization problem" means that hill climbing solves problems where we need to maximize or minimize a given real function by choosing values from the given inputs, for example the travelling salesman problem, where we need to minimize the distance travelled by the salesman. "Heuristic search" means that this search algorithm may not find the optimal solution to the problem, but it will give a good solution in reasonable time. A heuristic function is a function that ranks all the possible alternatives at any branching step of the search algorithm based on the available information; it helps the algorithm select the best route out of the possible routes.

Unit-2 Hill Climbing
State-space diagram for hill climbing:

Unit-2 Hill Climbing
- Local maximum: a state which is better than its neighboring states, but for which there is another state that is higher still.
- Global maximum: the best possible state of the state-space landscape; it has the highest value of the objective function.
- Current state: the state in the landscape diagram where the agent is currently present.
- Flat local maximum: a flat region of the landscape where all the neighboring states of the current state have the same value.
- Shoulder: a plateau region which has an uphill edge.

Unit-2 Hill Climbing
The main features of the hill climbing algorithm are:
- Generate-and-test variant: hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps to decide which direction to move in the search space.
- Greedy approach: the hill climbing search moves in the direction which optimizes the cost.
- No backtracking: it does not backtrack in the search space, as it does not remember previous states.

Unit-2 Hill Climbing
Variants of hill climbing:
- Simple hill climbing
- Steepest-ascent hill climbing
- Stochastic hill climbing
- Random-restart hill climbing (shotgun hill climbing)

Unit-2 Hill Climbing
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time and selects the first one that improves the current cost, setting it as the current state. It checks only one successor state and, if that successor is better than the current state, moves to it; otherwise it stays in the same state.
Algorithm for simple hill climbing:
Step 1: Evaluate the initial state. If it is the goal state, return success and stop.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check the new state: if it is the goal state, return success and quit; else if it is better than the current state, make it the current state; else if it is not better than the current state, return to step 2.
Step 5: Exit.

Unit-2 Hill Climbing
The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It examines all the neighboring nodes of the current state and selects the neighbor that is closest to the goal state. This algorithm consumes more time because it evaluates multiple neighbors.
Algorithm for steepest-ascent hill climbing:
Step 1: Evaluate the initial state. If it is the goal state, return success and stop; else make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
- Let SUCC be a state such that any successor of the current state will be better than it.
- For each operator that applies to the current state: apply the operator and generate a new state; evaluate the new state; if it is the goal state, return it and quit, else compare it to SUCC; if it is better than SUCC, set the new state as SUCC.
- If SUCC is better than the current state, set the current state to SUCC.
Step 3: Exit.

Unit-2 Hill Climbing
Stochastic hill climbing does not examine all the neighboring nodes before deciding which node to select. It selects a neighboring node at random and decides, based on the amount of improvement in that neighbor, whether to move to it or to examine another.
Algorithm:
Step 1: Evaluate the initial state. If it is a goal state, stop and return success; otherwise make the initial state the current state.
Step 2: Repeat the following until a solution is found or the current state does not change.
a) Select an operator that has not yet been applied to the current state.
b) Apply the successor function to the current state and generate the neighboring states.
c) Among the generated neighbor states that are better than the current state, choose one at random (or based on some probability function).
d) If the chosen state is the goal state, return success; else make it the current state and repeat from step 2 b).
Step 3: Exit.

Unit-2 State Space Search
| Simple Hill Climbing | Steepest-Ascent Hill Climbing | Stochastic Hill Climbing |
|---|---|---|
| Selects the first successor node that is closer to the solution than the current state. | All the successor nodes are compared and the one closest to the solution is chosen; it examines all the neighbors before deciding how to move. | Selects a successor node at random and decides whether to move to it based on its improvement. |
| Evaluates only one neighbor node state at a time. | It is similar to best-first search. | It selects a neighbor at random. |
| Less time consuming. | Comparatively more time consuming. | Time consuming. |
| Solution is not guaranteed. | Solution is guaranteed. | Solution is guaranteed. |
| Less optimal. | Depends on the problem. | Most optimal. |
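A minimal sketch of the steepest-ascent loop described above, run on a one-dimensional toy objective; the objective function and neighbor function are invented for illustration, and the run also demonstrates the local-maximum problem discussed next.

```python
import random

def steepest_ascent_hill_climbing(initial, neighbors, value):
    """Steepest-ascent variant: at each step examine all neighbors and move
    to the best one; stop when no neighbor improves on the current state."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current                # a local (possibly not global) maximum
        current = best

# Toy objective: global maximum at x = 20, a lower local maximum at x = -30.
def value(x):
    return -abs(x - 20) if x > 0 else -abs(x + 30) - 5

def neighbors(x):
    return [x - 1, x + 1]

print(steepest_ascent_hill_climbing(random.randint(-50, 50), neighbors, value))
# prints 20 or -30 depending on the random start: the search can get stuck locally
```

Swapping `max(...)` for "take the first improving neighbor" gives simple hill climbing, and picking an improving neighbor at random gives the stochastic variant.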
Unit-2 Problems of Hill Climbing
Hill climbing cannot reach the optimal/best state (the global maximum) if it enters any of the following regions:
- Local maximum: at a local maximum, all neighboring states have a value worse than the current state. To overcome the local maximum problem, use the backtracking technique: maintain a list of visited states, and if the search reaches an undesirable state, backtrack to a previous configuration and explore a new path.
- Plateau: on a plateau, all neighbors have the same value, so it is not possible to select the best direction. To overcome plateaus, make a big jump: randomly select a state far away from the current state; chances are that we will land in a non-plateau region.

Unit-2 Beam Search
A heuristic search algorithm that examines a graph by extending the most promising nodes in a limited set is known as beam search. Beam search always expands the W best nodes at each level, where W is called the beam width. It progresses level by level and moves downwards only from the best W nodes at each level. Beam search constructs its search tree using breadth-first search: it generates all the successors of the current level's states at each level of the tree, but at each level it evaluates only W of them; the other nodes are not taken into account. The heuristic cost associated with each node is used to choose the best nodes. If B is the branching factor, at every depth there will always be W × B nodes under consideration, but only W will be chosen. More states are pruned when the beam width is reduced.

Unit-2 Beam Search
When W = 1, the search becomes a hill-climbing search in which the best node is always chosen from the successor nodes. If the beam width is unlimited, no states are pruned and beam search becomes breadth-first search. The beam width bounds the amount of memory needed to complete the search, but at the cost of completeness and optimality (it may not find the best solution), because the desired state could have been pruned.

Unit-2 Example of Beam Search
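A short sketch of the level-by-level pruning described above; the tree and heuristic values in the usage example are illustrative assumptions, not the slide's example.

```python
import heapq

def beam_search(start, successors, heuristic, is_goal, beam_width):
    """Level-by-level beam search: at each level keep only the `beam_width`
    successors with the lowest heuristic value."""
    if is_goal(start):
        return start
    level = [start]
    while level:
        candidates = []
        for node in level:
            for child in successors(node):
                if is_goal(child):
                    return child
                candidates.append(child)
        # prune: keep only the W best successors for the next level
        level = heapq.nsmallest(beam_width, candidates, key=heuristic)
    return None

# Illustrative tree and heuristic values (assumed, not from the slides).
tree = {'S': ['A', 'B', 'C'], 'A': ['D', 'E'], 'B': ['F', 'G'], 'C': ['H'],
        'D': [], 'E': [], 'F': [], 'G': [], 'H': []}
h = {'S': 7, 'A': 6, 'B': 3, 'C': 5, 'D': 6, 'E': 5, 'F': 2, 'G': 0, 'H': 4}
print(beam_search('S', lambda n: tree[n], lambda n: h[n], lambda n: n == 'G',
                  beam_width=2))
# G
```

Setting `beam_width=1` reduces the loop to greedy hill climbing, and an effectively unlimited beam width turns it into breadth-first search, matching the two limiting cases noted above.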
Unit-2 Variable Neighbourhood Search
Variable neighbourhood search (VNS) is based on three facts:
1. A local minimum with respect to one neighborhood structure is not necessarily a local minimum with respect to another.
2. A global minimum is a local minimum with respect to all possible neighborhood structures.
3. For many problems, local minima with respect to one or several neighborhoods are close to each other.

Unit-2 Variable Neighbourhood Search
Basic VNS algorithm (a local search routine is assumed):
- Initialization: choose the neighborhood structures, an initial solution, and a stopping condition.
- Repeat until the stopping condition is met: set k = 1 and repeat until the last neighborhood has been tried:
  - Shaking: generate a random point in the k-th neighborhood of the current solution.
  - Local search: find the local optimum starting from that point.
  - Move or not: if the local optimum is better, move to it and reset k = 1; otherwise try the next neighborhood (k = k + 1).

Unit-2 Variable Neighbourhood Search
Variations of VNS: Variable Neighborhood Descent (VND), Reduced VNS (RVNS), Skewed VNS (SVNS), General VNS (GVNS), VN Decomposition Search (VNDS), Parallel VNS (PVNS), Primal-Dual VNS (P-D VNS), Reactive VNS, Backward-Forward VNS, Best-Improvement VNS, Exterior-Point VNS, VN Simplex Search (VNSS).

Unit-2 State Space Search
Search Algorithm

Unit-2 Best-first Search
The greedy best-first search algorithm always selects the path which appears best at that moment. It is a combination of depth-first search and breadth-first search, and it uses only the heuristic as its evaluation function: f(n) = h(n).

Unit-2 Best-first Search
Best-first search algorithm:
Step 1: Place the starting node in the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n with the lowest value of h(n) from the OPEN list and place it in the CLOSED list.
Step 4: Expand node n and generate its successors.
Step 5: Check each successor of node n: if any successor is a goal node, return success and terminate the search; else proceed to Step 6.
Step 6: For each successor node, evaluate the evaluation function f(n) and check whether the node is already in the OPEN or CLOSED list; if it is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.

Unit-2 Best-first Search
Consider the search problem below; we traverse it using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), given in the accompanying table, while the OPEN and CLOSED lists are maintained.

Unit-2 A* Algorithm
The A* algorithm uses a modified evaluation function and a best-first search. A* minimizes the total path cost, and under the right conditions it provides the cheapest-cost solution in optimal time.

Unit-2 A* Algorithm
The evaluation function f is an estimate of the value of a node x, given by f(x) = g(x) + h'(x), where g(x) is the cost to get from the start state to state x, and h'(x) is the estimated cost to get from state x to the goal state (the heuristic).

Unit-2 A* Algorithm
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty; if it is, return failure and stop.
Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise continue.
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute its evaluation function and place it in the OPEN list.
Step 5: Otherwise, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.

Unit-2 A* Algorithm
In this example, we traverse the given graph using the A* algorithm. The heuristic value of each state is given in the table, so we calculate f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach that node from the start state.

Unit-2 A* Algorithm
Initialization: {(S, 5)}
Iteration 1: {(S → A, 4), (S → G, 10)}
Iteration 2: {(S → A → C, 4), (S → A → B, 7), (S → G, 10)}
Iteration 3: {(S → A → C → G, 6), (S → A → C → D, 11), (S → A → B, 7), (S → G, 10)}
Iteration 4 gives the final result: S → A → C → G provides the optimal path with cost 6.

Unit-2 A* Algorithm
Points to remember:
- The A* algorithm returns the first path found to the goal and does not go on to search all remaining paths.
- The efficiency of the A* algorithm depends on the quality of the heuristic.
- The A* algorithm expands every node whose f(n) value is less than the cost of the optimal solution.
- Complete: the A* algorithm is complete as long as the branching factor is finite and every action has a fixed (positive) cost.
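The A* loop above can be sketched with a priority queue ordered on f = g + h. The edge costs and heuristic table below are assumptions chosen so that the trace reproduces the iterations listed above (ending with S → A → C → G at cost 6); they are not read off the slide figure itself.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: dict node -> list of (neighbor, edge_cost); h: heuristic dict.
    Returns (cost, path) of the cheapest path found, or None."""
    open_list = [(h[start], 0, start, [start])]    # entries are (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(open_list,
                               (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None

# Assumed edge costs and heuristic values consistent with the iterations above.
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)], 'C': [('D', 3), ('G', 4)], 'D': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (6, ['S', 'A', 'C', 'G'])
```

Keeping the best known g value per node (`best_g`) plays the role of the back-pointer update in Step 5 of the algorithm above.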
Unit-2 A* Algorithm
Solve the following problem using the A* algorithm: the numbers written on the edges represent the distances between nodes [g(n)], and the numbers written on the nodes represent the heuristic values [shown in red, h(n)]. Find the most cost-effective path from start state A to final state J using the A* algorithm.
Solution

Unit-2 Modified State Evaluation
The value of each state is a combination of the cost of the path to the state and the estimated cost of reaching a goal from the state. The idea is to use the path to a state to determine (partially) the rank of the state when compared to other states. This does not make sense for DFS or BFS, but it is useful for best-first search.

Unit-2 Modified State Evaluation
Consider a best-first search that generates the same state many times. Which of the paths leading to the state is the best? Recall that often the path to the goal is the answer (for example, in the water jug problem).

Unit-2 AO* Algorithm
The AO* method divides any given difficult problem into a smaller group of problems that are then resolved using the AND-OR graph concept. The AND side of the graph represents a set of tasks that must all be completed to achieve the main goal, while the OR side of the graph represents alternative methods for accomplishing the same main goal.

Unit-2 AO* Algorithm
Working of the AO* algorithm: the evaluation function in AO* is f(n) = g(n) + h(n), i.e. actual cost plus estimated cost, where g(n) is the cost from the initial node to the current node and h(n) is the estimated cost from the current node to the goal state.

Unit-2 AO* Example
In the example below, each node is given with its heuristic value h(n), and the edge length is taken as 1.

Unit-2 AO* Example
Solve the following problem using the AO* algorithm: all the numbers written in brackets represent the heuristic value [h(n)], and each edge is considered to have a value of 1 by default [i.e. g(n) = 1].
Solution

Unit-2 AO* Example
Solve the following problem using the AO* algorithm: all the numbers written in brackets represent the heuristic value [h(n)], and each edge is considered to have a value of 1 by default [i.e. g(n) = 1].
Solution

Unit-2 Comparison
| A* Algorithm | AO* Algorithm |
|---|---|
| Works on best-first search. | Works on best-first search. |
| Informed search technique. | Informed search technique. |
| Works on given heuristic values. | Works on given heuristic values. |
| Always gives an optimal solution. | Does not guarantee an optimal solution. |
| Even after reaching the goal state, further nodes are explored. | After reaching the goal state, further nodes are not explored. |
| Moderate memory is used. | Comparatively less memory is used. |
| An endless loop is possible. | An endless loop is not possible. |

Unit-2 Constraint Satisfaction Problem
The Constraint Satisfaction Problem (CSP) is a fundamental topic in artificial intelligence that deals with solving problems by identifying constraints and finding solutions that satisfy those constraints. CSP is a specific problem-solving approach that involves identifying the constraints that must be satisfied and finding a solution that satisfies all of them. More formally, a CSP is defined as a triple (X, D, C), where:
- X is a set of variables {x1, x2, ..., xn}.
- D is a set of domains {D1, D2, ..., Dn}, where each Di is the set of possible values for xi.
- C is a set of constraints {C1, C2, ..., Cm}, where each Ci restricts the values that can be assigned to a subset of the variables.
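A minimal backtracking sketch over the (X, D, C) triple just defined; the three-variable map-coloring instance in the usage example is an assumed illustration, not taken from the slides.

```python
def consistent(var, value, assignment, constraints):
    """Check all binary constraints involving `var` against assigned variables."""
    for x, y, pred in constraints:
        if x == var and y in assignment and not pred(value, assignment[y]):
            return False
        if y == var and x in assignment and not pred(assignment[x], value):
            return False
    return True

def backtracking_search(variables, domains, constraints, assignment=None):
    """Minimal CSP solver: assign one variable at a time and backtrack
    whenever a constraint is violated."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            result = backtracking_search(variables, domains, constraints,
                                         {**assignment, var: value})
            if result is not None:
                return result
    return None                               # no value works: backtrack

# Illustrative map-coloring instance (assumed): three regions, all pairs differ.
variables = ['A', 'B', 'C']
domains = {v: ['red', 'green', 'blue'] for v in variables}
ne = lambda a, b: a != b
constraints = [('A', 'B', ne), ('B', 'C', ne), ('A', 'C', ne)]
print(backtracking_search(variables, domains, constraints))
# {'A': 'red', 'B': 'green', 'C': 'blue'}
```

The cryptarithmetic puzzles below fit the same (X, D, C) mold: the variables are the letters, each domain is the digits 0-9, and the constraints are the column-wise addition rules plus the all-different requirement.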
Unit-2 Constraint Satisfaction Problem
Crypt-arithmetic problem: SEND + MORE = MONEY.
Solution

Unit-2 Constraint Satisfaction Problem
Crypt-arithmetic problems: CROSS + ROADS = DANGER, and DONALD + GERALD = ROBERT.

Unit-2 Constraint Satisfaction Problem
Types of constraints in CSP:
- Unary constraints: a unary constraint is a constraint on a single variable, for example "variable A is not equal to Red".
- Binary constraints: a binary constraint involves two variables and specifies a constraint on their values; for example, a constraint that two tasks cannot be scheduled at the same time is a binary constraint.
- Global constraints: global constraints involve more than two variables and specify complex relationships between them; for example, a constraint that no two tasks can be scheduled at the same time if they require the same resource is a global constraint.

Unit-2 Means-End Analysis
Means-end analysis (MEA) is an important concept in artificial intelligence because it enhances problem resolution. MEA solves problems by defining the goal and establishing the right action plan. This technique combines forward and backward strategies to solve complex problems; with these mixed strategies, complex problems can be tackled first, followed by smaller ones. In this technique, the system evaluates the differences between the current state or position and the target or goal state, and then decides the best action to take to reach the end goal.

Unit-2 Means-End Analysis
1. First, the system evaluates the current state to establish whether there is a problem. If a problem is identified, an action should be taken to correct it.
2. The second step involves defining the target or desired goal that needs to be achieved.
3. The target goal is split into sub-goals, which are further split into other smaller goals.
4. This step involves establishing the actions or operations that will be carried out to achieve the end state.
5. In this step, all the sub-goals are linked with corresponding executable actions (operations).
6. After that is done, intermediate steps are undertaken to solve the problems in the current state. The chosen operators are applied to reduce the differences between the current state and the end state.
7. This step involves tracking all the changes made to the actual state. Changes are made until the target state is achieved.

Unit-2 Means-End Analysis
1. Delete operator: the dot symbol at the top right corner of the initial state does not exist in the goal state; it can be removed by applying the delete operator.

Unit-2 Means-End Analysis
2. Move operator: we then compare the new state with the end state. The green diamond in the new state is inside the circle, while the green diamond in the end state is at the top right corner; we move this diamond symbol to the right position by applying the move operator.
3. Expand operator: after evaluating the new state generated in step 2, we find that the diamond symbol is smaller than the one in the end state; we can increase its size by applying the expand operator.

Unit-2 Means-End Analysis
Applications of MEA:
1. Organizational planning
2. Business transformation
3. Gap analysis

Reference Book: Artificial Intelligence – Elaine Rich and Kevin Knight
