Chapter 3: Search Algorithms for Problem Solving

Summary

This document discusses search algorithms for problem solving in artificial intelligence. It covers search algorithms such as breadth-first search and depth-first search, along with their applications and performance measures, and explores state-space representation methods such as graphs and trees.

Full Transcript


Chapter 3: Search Algorithms for Problem Solving
Pr. Meftah Boudjelal

1. Introduction

Many AI problems are hard to solve because they are difficult to characterize; not only the problem itself but also the path to its solution is hard to characterize. Problem solving basically involves doing the right thing at the right time: given a problem to solve, the task is to select the right moves that lead to the solution. In this chapter we will learn the process of solving AI problems using state space search. We will also see the difficulties a designer might face.

Importance of search algorithms for problem-solving

Efficiency and Optimization: Search algorithms are essential for efficiently managing resources and improving system performance. They help minimize the use of computational resources like time and memory, making systems faster and more reliable. By finding optimal solutions, search algorithms enable efficient resource management and enhance overall performance.

Decision Making and Strategic Planning: Search algorithms play a crucial role in informed decision-making by providing the best possible solutions based on available data and constraints. They are used to explore different scenarios and evaluate outcomes, aiding in strategic planning and risk assessment. This capability is particularly valuable in fields like healthcare, finance, and manufacturing, where optimal decisions can have significant impacts.

Problem-Solving in AI and Complex Problems: In artificial intelligence, search algorithms are fundamental for pathfinding in robotics, game development, and navigation systems. They are also used in planning and scheduling tasks, such as job scheduling and logistics optimization. Additionally, search algorithms are essential for solving complex combinatorial problems like the traveling salesman problem and constraint satisfaction problems, where the goal is to find solutions that satisfy all given constraints.
Data Retrieval and Pattern Recognition: Search algorithms are at the core of information retrieval systems, such as search engines, databases, and recommendation systems. They are also used in pattern recognition tasks, such as image and speech recognition, where the goal is to find patterns in data. This makes them indispensable for applications that rely on efficient data retrieval and accurate pattern recognition.

Optimization and Real-World Applications: Search algorithms are used in various optimization techniques, such as linear programming, dynamic programming, and genetic algorithms. They help in finding global optima in complex optimization problems, ensuring that the best possible solution is found. In real-world applications, search algorithms are used in healthcare for diagnosis and treatment planning, in finance for portfolio optimization and risk management, and in manufacturing for supply chain optimization and production planning.

Scalability and Adaptability: Search algorithms can handle large-scale problems efficiently, making them suitable for big data applications and real-time systems. They can be designed to take advantage of parallel processing, further enhancing their scalability and performance. Additionally, search algorithms can adapt to dynamic environments, making them suitable for real-time decision-making and problem-solving in changing conditions. They are robust and can handle uncertainty and variability in data and problem constraints.

2. Definitions

2.1. Problem definition

A problem is a situation, condition, or issue that is perceived as difficult, confusing, or undesirable and requires a resolution or solution.

2.2. Problem for AI

A problem is a well-defined, often formalized, challenge or task that an AI system is designed to address or solve. The definition and formulation of the problem are crucial steps in designing and implementing effective AI solutions.
AI problems have several key characteristics:

Well-Defined Objective: An AI problem typically has a clear, quantifiable goal or objective that the AI system aims to achieve, such as maximizing a reward signal, minimizing a cost function, or finding an optimal solution.

Formalized Input and Output: AI problems are usually formalized with a specific input format (e.g., data, state, or environment) and a desired output format (e.g., prediction, decision, or action).

Constraints and Rules: AI problems often have constraints, rules, or limitations that the system must adhere to while solving the problem, such as time limits, resource constraints, or domain-specific rules.

Data-Driven: Many AI problems involve processing and analyzing data to extract insights, make predictions, or support decision-making.

Uncertainty and Complexity: AI problems often involve dealing with uncertainty, incomplete information, or complex environments, requiring the AI system to make probabilistic inferences or learn from experience.

2.3. Examples of AI problems

Classification: Assigning a label or category to input data, such as spam detection or image recognition.

Regression: Predicting a continuous value based on input data, such as house price prediction.

Planning and Scheduling: Finding an optimal sequence of actions to achieve a goal, such as route planning or job scheduling.

Optimization: Finding the best solution from a set of possible solutions, such as resource allocation or portfolio optimization.

3. The State Space

3.1. Definition

A state space is the set of all possible configurations or states that a system can be in. Each state represents a specific situation or condition of the system, and the state space encompasses the entire range of these situations. The concept of a state space is fundamental to various AI techniques, including search algorithms, planning, and optimization.
The problem solver begins in a given or start state and has some desired state as a goal state. The desired state can be a single state that is completely specified, or it may be a set of states described by a common property. Understanding and effectively navigating the state space is a crucial aspect of solving many AI problems. The design of the state space, the choice of operators, and the selection of search strategies are all critical factors in developing efficient and effective AI solutions.

3.2. Components of a State Space

a) States: Individual configurations or conditions of the system. Each state is a unique snapshot of the system at a particular point in time.
b) Initial State: The starting point or the initial configuration of the system. This is where the search or problem-solving process begins.
c) Goal State: The desired outcome or the final configuration that the system aims to reach. The objective of the search or problem-solving process is to find a path from the initial state to the goal state.
d) Operators or Actions: The set of actions or transitions that can be applied to move from one state to another. These operators define how the system can change from one state to another.
e) Transitions: The rules or functions that describe how applying an operator changes the state of the system. Transitions define the relationships between states.

Example

Consider a real-world example of a state space search problem: navigating a maze.

State: The current position of the agent (e.g., a robot) within the maze.
Initial State: The starting position of the agent at the entrance of the maze.
Goal State: The target position, usually the exit of the maze.
Transition: Moving from one position to an adjacent position (up, down, left, or right) within the maze.
Path: A sequence of moves from the initial state to the goal state.
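The maze components above can be sketched directly in code. This is a minimal illustration, not part of the original text: the 4x4 grid size, the wall positions in WALLS, and the function name neighbors are all assumptions made for the example.

```python
# Minimal sketch of a maze state space: states are (row, col) positions.
WALLS = {(1, 1), (2, 1)}   # hypothetical blocked cells
SIZE = 4                   # 4x4 grid, positions (0,0) through (3,3)

INITIAL_STATE = (0, 0)
GOAL_STATE = (3, 3)

def neighbors(state):
    """Apply the four move operators (up, down, left, right) to a state."""
    row, col = state
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nxt = (row + dr, col + dc)
        # A transition is legal if it stays on the grid and avoids walls.
        if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE and nxt not in WALLS:
            yield nxt

print(sorted(neighbors((0, 0))))  # [(0, 1), (1, 0)]
```

Any search algorithm in this chapter only needs these three pieces: an initial state, a goal test, and a successor (neighbor) function.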
Imagine a simple grid maze where the agent starts at the top-left corner (0,0) and needs to reach the bottom-right corner (3,3). The agent can move in four directions: up, down, left, and right, but cannot pass through walls.

1. Initial State: (0,0)
2. Goal State: (3,3)
3. Possible Transitions:
o From (0,0) to (0,1) or (1,0)
o From (0,1) to (0,2) or (1,1) or back to (0,0)
o And so on…

The agent will explore different paths, keeping track of visited states to avoid cycles, until it finds a path that leads to the goal state.

3.3. Problem solving by state space search

State space search in AI is a fundamental technique used to solve problems by navigating through a series of states and transitions. In this approach, a problem is represented as a collection of states, each depicting a specific configuration, and the transitions represent possible actions or moves between these states. The objective is to find a sequence of actions that leads from an initial state to a goal state. This concept is analogous to finding a path through a complex maze: each decision or action leads to a new state, and the goal is to discover the optimal sequence of actions that leads to a desired outcome.

By applying state space search, AI systems can effectively tackle a diverse array of problems, ranging from robotics and game-playing to natural language processing and scheduling. It serves as a crucial tool for enabling machines to make intelligent decisions and find optimal solutions in complex, dynamic environments.

3.4. Principles and Features of State Space Search

The efficiency and effectiveness of state space search depend heavily on several principles and characteristics. Understanding these elements is crucial for selecting the right search strategy and optimizing the search process.

a) Expansiveness: The number of successors that each state can generate. This impacts how many new states are explored from a given state.
b) Branching Factor: The average number of successors of each state. It influences the width of the search tree and the overall complexity of the search.
c) Depth: The length of the path from the initial state to the goal state in the search tree. Deeper search trees can increase the time required to find a solution.
d) Completeness: A search strategy is complete if it guarantees finding a solution whenever one exists.
e) Optimality: A search strategy is optimal if it guarantees finding the best solution according to a specified criterion.
f) Time Complexity: The duration of the state space exploration. It is influenced by the branching factor and the depth of the search tree.
g) Space Complexity: The amount of memory required to carry out the search. This depends on the number of states that need to be stored in memory simultaneously.

3.5. Problem Representation

A number of factors need to be taken into consideration when developing a state space representation. Factors that must be addressed are:

What is the goal to be achieved?

What are the legal moves or actions?

What knowledge needs to be represented in the state description?

Type of problem - There are basically three types of problems. Some problems only need a representation, e.g., crossword puzzles. Other problems require a yes or no response indicating whether a solution can be found. Finally, the last type comprises problems that require a solution path as an output, e.g., mathematical theorems or the Towers of Hanoi. In these cases, we know the goal state and we need to know how to attain it.

Best solution vs. good-enough solution - For some problems a good-enough solution is sufficient, e.g., theorem proving or the eight puzzle. However, other problems require a best or optimal solution, e.g., the traveling salesman problem.

3.6. Representation of a State Space

Representing a state space is a fundamental step in designing algorithms for problem-solving, planning, and optimization in Artificial Intelligence (AI). The representation of a state space involves defining the states, the transitions between states, and the structure of the space. The choice of representation depends on the nature of the problem and the specific requirements of the AI system.

3.6.1. Graph Representation

A graph is a natural and widely used representation for a state space. In this representation:

Nodes: Represent the states of the system.
Edges: Represent the transitions or actions that move the system from one state to another.
Labels: Edges can be labeled with the actions or costs associated with the transitions.

3.6.2. Tree Representation

A tree is a special type of graph where each node has a single parent (except the root node) and can have multiple children. Trees are useful for representing hierarchical or recursive state spaces.

Root Node: Represents the initial state.
Leaf Nodes: Represent terminal states, including the goal state.
Internal Nodes: Represent intermediate states.
Edges: Represent transitions between states.

Example: Eight-puzzle problem state space represented by a tree.

3.6.3. State Vectors

In some cases, states can be represented as vectors of attributes or features. This is common in problems where the state can be described by a fixed set of variables.

State Vector: A tuple or array of values representing the state.
Transitions: Functions that modify the state vector based on actions.

3.6.4. Matrix Representation

For problems with a fixed, discrete state space, a matrix can be used to represent the states and transitions.

Matrix: A 2D array where each cell represents a state.
Transitions: Rules or functions that define how to move from one cell to another.

3.6.5. Symbolic Representation

In some cases, states can be represented symbolically using logical expressions or other formal languages.

States: Symbolic expressions or formulas.
Transitions: Rules or axioms that define how to transform one expression into another.

3.6.6. Object-Oriented Representation

In object-oriented programming, states can be represented as objects with attributes and methods.

State Object: An object with attributes representing the state.
Methods: Functions that define transitions between states.

4. Steps in State Space Search

The following steps are often involved in the state space search process:

Step 1: Define the State Space. Determine the collection of all potential states and the transitions between them. To do this, the problem must be modelled in a fashion that encompasses all pertinent configurations and actions.

Step 2: Pick a Search Strategy. Decide how to explore the state space. Typical strategies include:

Breadth-First Search (BFS) investigates every node at the current depth level before going on to nodes at the following depth level. Complete, and optimal for unweighted graphs.
Depth-First Search (DFS) investigates a branch as far as it can go before backtracking. Less memory-intensive, although completeness and optimality are not assured.
Uniform Cost Search (UCS) expands the least expensive node first; it is the method of choice for locating the lowest-cost solution.
Greedy Best-First Search expands the node that seems closest to the objective according to a heuristic.
A* Search combines the cost to reach the node with a heuristic estimating the cost to the target; with an admissible heuristic, it assures completeness and optimality.

Step 3: Start the Search. Begin at the initial state by adding it to the frontier (the collection of states to be investigated).
Step 4: Extend the Nodes. Using the selected search strategy, iteratively expand nodes from the frontier, producing successor states and appending them to the frontier. After each node has been expanded, determine whether it corresponds to the goal state. If so, retrace the route to the goal and stop the search.

Step 5: Address State Repetition. Put safeguards in place to prevent revisiting the same state, such as keeping track of the states already visited.

Step 6: End the Search. The search comes to an end when the goal state is discovered or, in the event that no viable solution is identified, when every state has been investigated.

By employing these methods to systematically explore the state space, AI systems are able to tackle complicated problems in an organized and methodical manner.

5. Measuring problem-solving performance

The performance of search algorithms is evaluated on the following four criteria.

Completeness: An algorithm is said to be complete if it is guaranteed to find a goal state whenever one is reachable. We also call such algorithms systematic, by which we mean that the algorithm searches the entire space and returns failure only if a goal state is not reachable.

Quality of solution (optimality): We may optionally specify a quality measure for the algorithm. We begin with the length of the path found as a measure of quality. Later, we will associate edge costs with each move, and then the total cost of the path will be the measure.

Space complexity: The amount of space the algorithm requires to execute. We will see that this is a critical measure, as the number of candidates in the search space often grows exponentially.

Time complexity: The amount of time needed by the algorithm, measured by the number of candidates inspected. The most desirable complexity is linear in the path length, but one will often have to contend with exponential complexity.

6. Types of search algorithms

Search algorithms fall into two families:

Uninformed search: breadth-first search, depth-first search, ...
Informed search: Hill Climbing, Greedy search, ...

1. Uninformed search algorithms (blind search) have no information about the domain or the nature of the problem.
2. Informed search algorithms (heuristic search) have further information about the cost of the path between any state in the search space and the goal state.

6.1. Uninformed search algorithms

Uninformed search algorithms are a fundamental class of search strategies in artificial intelligence (AI). These algorithms operate without any additional information about the problem domain beyond what is explicitly provided in the problem definition. With these approaches, nothing is presumed to be known about the state space; they rely only on the structure of the problem space and the goal state. The principal algorithms that fall under this heading are depth-first search (DFS) and breadth-first search (BFS). These algorithms share two properties:

They do not use heuristic measures by which the search would proceed along the most promising path.
Their aim is to find some solution to the given problem.

Challenges and limitations

Despite their simplicity and versatility, uninformed search algorithms face several challenges and limitations.

Inefficiency: They can be inefficient in complex problems with large search spaces, leading to an exponential increase in the number of states explored. They may also consume significant computational resources and memory.
Lack of Optimality: Uninformed search algorithms do not guarantee an optimal solution, as they do not consider the cost of reaching the goal or other relevant information.
Potential for Infinite Loops: Algorithms like DFS can become trapped in infinite loops if the search space is too deep or if there are cycles in the graph.

6.1.1. Breadth First Search (BFS)

Breadth-first search (BFS) is a simple strategy in which the root node is expanded first, then all the successors of the root node, then their successors, and so on. In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level. In BFS, nodes are visited level by level from the top of the tree to the bottom, left to right. All nodes at level i are visited before any nodes at level i+1 are encountered.

BFS is implemented very simply by using a FIFO queue for the frontier. New nodes (which are always deeper than their parents) go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first.

Breadth-first search pseudocode:

function BREADTH-FIRST-SEARCH(problem) returns a solution, or failure
  node ← a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
  if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
  frontier ← a FIFO queue with node as the only element
  explored ← an empty set
  loop do
    if EMPTY?(frontier) then return failure
    node ← POP(frontier)
    add node.STATE to explored
    for each action in problem.ACTIONS(node.STATE) do
      child ← CHILD-NODE(problem, node, action)
      if child.STATE is not in explored or frontier then
        if problem.GOAL-TEST(child.STATE) then return SOLUTION(child)
        frontier ← INSERT(child, frontier)

The figure below shows the progress of the search on a simple binary tree: a breadth-first traversal visits the nodes in the order A, B, C, D, E, F, G.

How does breadth-first search rate according to the four criteria from the previous section? We can easily see that it is complete: if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after generating all shallower nodes (provided the branching factor b is finite).
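The BREADTH-FIRST-SEARCH pseudocode above can be sketched as runnable Python. The example graph is an assumption made for illustration; it mirrors the binary tree of the traversal figure (A through G). The frontier holds whole paths so the solution path can be returned directly, and the explored set is updated on generation, which also covers the pseudocode's "not in frontier" check.

```python
from collections import deque

def breadth_first_search(graph, initial, goal):
    """Return a shallowest path from initial to goal, or None on failure."""
    if initial == goal:                    # goal test on the initial state
        return [initial]
    frontier = deque([[initial]])          # FIFO queue of paths
    explored = {initial}
    while frontier:
        path = frontier.popleft()          # shallowest path first
        for child in graph[path[-1]]:
            if child not in explored:
                if child == goal:          # goal test when generated
                    return path + [child]
                explored.add(child)
                frontier.append(path + [child])
    return None                            # frontier empty: failure

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}
print(breadth_first_search(tree, "A", "G"))  # ['A', 'C', 'G']
```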
Note that as soon as a goal node is generated, we know it is the shallowest goal node, because all shallower nodes must have been generated already and failed the goal test. Now, the shallowest goal node is not necessarily the optimal one; technically, breadth-first search is optimal if the path cost is a nondecreasing function of the depth of the node. The most common such scenario is that all actions have the same cost.

So far, the news about breadth-first search has been good. The news about time and space is not so good. Imagine searching a uniform tree where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that level. Then the total number of nodes generated is

b + b^2 + b^3 + ... + b^d = O(b^d)

An exponential complexity bound such as O(b^d) is scary. The table below shows why: it lists, for various values of the solution depth d, the time and memory required for a breadth-first search with branching factor b = 10. The table assumes that 1 million nodes can be generated per second and that a node requires 1000 bytes of storage. Many search problems fit roughly within these assumptions (give or take a factor of 100) when run on a modern personal computer.

6.1.2. Depth First Search (DFS)

Depth-first search (DFS) attempts to plunge as deeply into a tree as quickly as possible. Whenever the search can make a choice, it selects the far left (or far right) option. A DFS plunges depth-first into a graph without regard for which edge it takes next until it cannot go any further, at which point it backtracks and continues. The progress of the search is illustrated in the figure below.
The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the frontier, so the search "backs up" to the next deepest node that still has unexplored successors.

As an example of DFS, consider the tree in the figure below. Tree traversal algorithms often "visit" a node several times. The DFS encounters nodes in the following order: A, B, D, E, C, F, G. Whereas breadth-first search uses a FIFO queue, depth-first search uses a LIFO queue (a stack): the most recently generated node is chosen for expansion.

Depth-first search pseudocode:

DFS-iterative(G, s):            // G is the graph, s is the source vertex
  let S be a stack
  S.push(s)                     // insert s in the stack
  mark s as visited
  while S is not empty:
    v = S.top()                 // pop a vertex from the stack to visit next
    S.pop()
    // push all the unvisited neighbours of v onto the stack
    for all neighbours w of v in G:
      if w is not visited:
        S.push(w)
        mark w as visited

DFS-recursive(G, s):
  mark s as visited
  for all neighbours w of s in G:
    if w is not visited:
      DFS-recursive(G, w)

The properties of depth-first search depend strongly on whether the graph-search or tree-search version is used. The graph-search version, which avoids repeated states and redundant paths, is complete in finite state spaces because it will eventually expand every node. The tree-search version, on the other hand, is not complete. For similar reasons, both versions are nonoptimal. For example, in the first figure, depth-first search will explore the entire left subtree even if node C is a goal node. If node J were also a goal node, then depth-first search would return it as a solution instead of C, which would be a better solution; hence, depth-first search is not optimal.
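The iterative DFS pseudocode can be sketched as runnable Python. The example graph is an assumption for illustration and mirrors the traversal figure (visit order A, B, D, E, C, F, G). One deliberate restructuring: this version marks a node visited when it is popped rather than when it is pushed, and pushes neighbours in reverse so the leftmost child is expanded first.

```python
def dfs_iterative(graph, source):
    """Return the order in which an iterative DFS visits the nodes."""
    visited, order = set(), []
    stack = [source]                  # LIFO stack: the frontier
    while stack:
        v = stack.pop()               # most recently generated node first
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # Reverse so the far-left neighbour ends up on top of the stack.
        for w in reversed(graph[v]):
            if w not in visited:
                stack.append(w)
    return order

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}
print(dfs_iterative(tree, "A"))  # ['A', 'B', 'D', 'E', 'C', 'F', 'G']
```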
The time complexity of depth-first graph search is bounded by the size of the state space (which may be infinite, of course). A depth-first tree search, on the other hand, may generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any node; this can be much greater than the size of the state space. Note that m itself can be much larger than d (the depth of the shallowest solution) and is infinite if the tree is unbounded.

6.1.3. Comparison between DFS and BFS

The choice between DFS and BFS depends on the specific problem you are trying to solve. DFS is often used for tasks like cycle detection and topological sorting, while BFS is preferred for finding the shortest path in unweighted graphs and for pathfinding tasks in networks.

Traversal Order
DFS: Explores nodes as deep as possible along each branch before backtracking.
BFS: Explores all neighboring nodes at the present depth level before moving on to nodes at the next depth level.

Memory Usage
DFS: Uses a stack (or recursion), which can be problematic for very deep graphs due to the depth of recursion.
BFS: Uses a queue, which can be problematic for very wide graphs due to the width of the queue.

Time Complexity
DFS and BFS: Both algorithms have a time complexity of O(V+E), where V is the number of nodes and E is the number of edges.

Applications
DFS:
o Cycle detection in a graph.
o Topological sorting.
o Traversing undirected graphs.
BFS:
o Finding the shortest path in an unweighted graph.
o Detecting connected components in an undirected graph.
o Pathfinding algorithms in networks.

Advantages and Disadvantages
DFS:
o Advantages: Less memory required for deep graphs.
o Disadvantages: Can enter infinite loops if the graph contains cycles.
BFS:
o Advantages: Finds the shortest path in an unweighted graph.
o Disadvantages: Requires more memory for wide graphs.

6.2. Informed search algorithms
Informed search algorithms, also known as heuristic search algorithms, are a crucial component of artificial intelligence (AI). These algorithms leverage additional information, often in the form of heuristics, to guide the search process more efficiently towards a solution. Heuristic search algorithms use domain-specific knowledge to improve the efficiency of the search process. This additional information, typically provided by heuristic functions, helps the algorithm make educated guesses about which paths to explore, thereby reducing the time and computational resources required to find a solution.

Heuristic search is useful for solving problems which:
o could not be solved any other way;
o would take an infinite or very long time to solve exactly.

Definition
Heuristic search methods are problem-specific techniques designed to find good solutions quickly by using rules of thumb or approximations. They aim to provide satisfactory solutions but do not guarantee optimality.

Specificity
Heuristic methods are tailored to specific problems or classes of problems. They exploit the particular characteristics and structures of these problems to find solutions efficiently.

Objective
The primary objective of heuristic search methods is to find acceptable solutions within a reasonable time frame. They focus on speed and efficiency rather than guaranteeing the best possible solution. Heuristic search methods greatly reduce the number of nodes considered in order to reach a goal state. They are ideally suited for problems whose combinatorial complexity grows very quickly.

Approach
Heuristic methods use empirical rules, approximations, and domain-specific knowledge to guide the search process. They often make trade-offs between solution quality and computational effort.

Examples
Greedy Algorithms: Make locally optimal choices with the hope of finding a global optimum.
Hill Climbing: Iteratively improves the solution by making small changes that increase the objective function value.
Best-First Search: Explores the most promising nodes first based on a heuristic evaluation function.

6.2.1. Local search algorithms

The search algorithms that we have seen so far are designed to explore search spaces systematically. This systematicity is achieved by keeping one or more paths in memory and by recording which alternatives have been explored at each point along the path. When a goal is found, the path to that goal also constitutes a solution to the problem. In many problems, however, the path to the goal is irrelevant. If the path to the goal does not matter, we might consider a different class of algorithms, ones that do not worry about paths at all.

Local search algorithms operate using a single current node (rather than multiple paths) and generally move only to neighbors of that node. Typically, the paths followed by the search are not retained. Although local search algorithms are not systematic, they have two key advantages: they use very little memory (usually a constant amount), and they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.

To understand local search, it is useful to consider the state-space landscape (as in the figure below). A landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function). If elevation corresponds to cost, then the aim is to find the lowest valley, a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak, a global maximum. Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.
Global Maximum: The highest point on the landscape, which is the goal state.
Local Maximum: A peak higher than all neighboring peaks but lower than the global maximum.
Flat Local Maximum: A region in the search space where the objective function reaches a maximum value and remains relatively constant over a range of input values.
Shoulder (Plateau): A region in the search space where the objective function has a relatively flat gradient, meaning that small changes in the variables do not significantly affect the function's value.

6.2.2. Hill Climbing

The hill-climbing search is an iterative algorithm that continually moves in the direction of increasing value. It terminates when it reaches a "peak" where no neighbor has a higher value. It starts with an arbitrary solution to a problem and iteratively makes small changes to improve the solution, considering only the local neighborhood of the current solution. The algorithm does not maintain a search tree, so the data structure for the current node need only record the state and the value of the objective function. Hill climbing does not look ahead beyond the immediate neighbors of the current state; this resembles trying to find the top of a mountain in thick fog. Hill climbing is very useful in routing-related problems like the travelling salesman problem, job scheduling, chip design, and portfolio management.

Algorithm for Simple Hill Climbing:

Step 1: Start with an initial state.
Step 2: Check if the initial state is the goal. If so, return success and exit.
Step 3: Enter a loop to search for a better state continuously. Select a neighboring state within the loop by applying an operator to the current state. Evaluate this new state:
o If it's the goal state, return success and exit.
o If it's better than the current state, update the current state to this new state.
o If it's not better, discard it and continue the loop.
Step 4: End the process if no better state is found and the goal isn't achieved.

Pseudocode for Hill Climbing:

function HillClimbing(initial_solution):
  current_solution = initial_solution
  while True:
    neighbors = generate_neighbors(current_solution)
    best_neighbor = current_solution
    for neighbor in neighbors:
      if evaluate(neighbor) > evaluate(best_neighbor):
        best_neighbor = neighbor
    if evaluate(best_neighbor) <= evaluate(current_solution):
      return current_solution    // no neighbor is better: a peak is reached
    current_solution = best_neighbor

6.2.3. Greedy Best-First Search

Greedy best-first search expands the node that appears closest to the goal according to a heuristic evaluation function, without considering the edge weights. In the worked example, the path found by greedy best-first search, (P -> C -> U -> S), evaluates to 11. The potential problem with greedy best-first search is revealed by the path (P -> R -> E -> S) having a cost of 10, which is lower than (P -> C -> U -> S). Greedy best-first search ignored this path because it does not consider the edge weights.

Advantages of Greedy Best-First Search

Simple and Easy to Implement: Greedy best-first search is a relatively straightforward algorithm, making it easy to implement.
Fast and Efficient: Greedy best-first search is a very fast algorithm, making it ideal for applications where speed is essential.
Low Memory Requirements: Greedy best-first search requires only a small amount of memory, making it suitable for applications with limited memory.
Flexible: Greedy best-first search can be adapted to different types of problems and can easily be extended to more complex problems.
Efficiency: If the heuristic function is a good estimate of how close a node is to the solution, this algorithm can be very efficient and find a solution quickly, even in large search spaces.

Disadvantages of Greedy Best-First Search

Inaccurate Results: Greedy best-first search is not guaranteed to find the optimal solution, as it is only concerned with following the most promising path.
Local Optima: Greedy best-first search can get stuck in local optima, meaning that the path chosen may not be the best possible path.
Heuristic Function: Greedy best-first search requires a heuristic function in order to work, which adds complexity to the algorithm.
Lack of Completeness: Greedy best-first search is not a complete algorithm, meaning it may not always find a solution even if one exists. This can happen if the algorithm gets stuck in a cycle or if the search space is too complex.
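Greedy best-first search can be sketched as runnable Python using a priority queue ordered by heuristic value alone. The graph, the heuristic values, and the edge structure below are illustrative assumptions: the node names echo the P, C, U, S, R, E example above, but the concrete h-values are made up for the sketch.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the lowest heuristic value first; ignore edge costs."""
    frontier = [(h[start], start, [start])]        # min-heap ordered by h
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)    # most promising node
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph[node]:
            if child not in explored:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None                                    # frontier exhausted: failure

graph = {"P": ["C", "R"], "C": ["U"], "R": ["E"],
         "U": ["S"], "E": ["S"], "S": []}
h = {"P": 10, "C": 4, "R": 6, "U": 2, "E": 3, "S": 0}
print(greedy_best_first(graph, h, "P", "S"))  # ['P', 'C', 'U', 'S']
```

With these assumed h-values the search commits to P -> C -> U -> S, illustrating the point made above: a cheaper route through R and E can exist, but greedy best-first search never weighs edge costs against the heuristic.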
