Solving Problems by Searching

Summary

This document presents an overview of search algorithms used for solving problems in AI. It discusses uninformed search strategies such as BFS and DFS and informed strategies such as A* search, and compares them on their completeness, optimality, time, and space properties. Examples such as maze solving and the 8-puzzle are used to illustrate the concepts.

Full Transcript


Contents: What are search problems? · State space · Tree search · Uninformed search · Informed search

What are Search Problems?

We will consider the problem of designing goal-based agents in known, fully observable, and deterministic environments. Example environment: a maze with a start location and an exit.

Remember: Goal-based Agent
The agent has the task of reaching a defined goal state. The performance measure is typically the cost to reach the goal. We will discuss a special type of goal-based agent called a planning agent, which uses search algorithms to plan a sequence of actions that leads to the goal. Given the agent's maze location, a map of the maze, and the exit location, the agent searches for a plan:

a_s = argmin_{a ∈ A} [cost(s, s_1, s_2, ..., s_n | s_n ∈ S_goal)]

Planning agents ask "what if":
- Decisions are based on (hypothesized) consequences of actions.
- They must have a model of how the world evolves in response to actions.
- They must formulate a goal (test) and consider how the world WOULD BE.
- Optimal vs. complete planning; planning vs. replanning.

For now, we consider only a discrete environment using an atomic state representation (states are just labeled 1, 2, 3, ...). The state space is the set of all possible states of the environment, and some states are marked as goal states. The optimal solution is the sequence of actions (or, equivalently, the sequence of states) that gives the lowest path cost for reaching the goal.

Phases:
1) Search/Planning: the process of looking for the sequence of actions that reaches a goal state. This requires that the agent knows what happens when it moves!
2) Execution: once the agent begins executing the search solution in a deterministic, known environment, it can ignore its percepts (open-loop execution).

Definition of a Search Problem
- Initial state: a state description (e.g., state 1).
- Actions: the set of possible actions, e.g., {N, E, S, W}.
- Transition model: a function that defines the new state resulting from performing an action in the current state.
- Goal state: a state description (e.g., state z on a discretization grid).
- Path cost: the sum of step costs.

Important: the state space is typically too large to be enumerated, or it is continuous. Therefore, the problem is defined by the initial state, the actions, and the transition model, and not by the set of all possible states.

Search Problems
A search problem consists of:
- a state space,
- a successor function (transition function) with actions and costs, e.g., ("N", 1.0), ("E", 1.0),
- a start state and a goal test.
A solution is a sequence of actions (a plan) which transforms the start state into a goal state.

Transition Function and Available Actions
As an action schema:
  PRECOND: no wall in the direction of movement
  EFFECT: change the agent's location according to the movement
As a function, the transition model can be implemented as a table representing the state space or as a graph, e.g.:

  state  action  result
  1      S       2
  2      N       1
  2      S       3
  ...
  4      E       a
  4      S       5
  4      N       3
  ...

The available actions in a state come from the state transition function. Note: known and deterministic is a property of the transition model.

Example: Traveling in Romania
- State space: cities.
- Successor function: roads — go to an adjacent city with cost = distance.
- Start state: Arad.
- Goal test: is state == Bucharest?
- Solution?

Example: Romania Vacation
On vacation in Romania; currently in Arad. The flight leaves tomorrow from Bucharest.
- Initial state: Arad.
- State space/transition model: defined as a graph.
- Actions: drive from one city to another.
- Transition model and states: if you go from city A to city B, you end up in city B.
- Goal state: Bucharest.
- Path cost: sum of edge costs (distance in miles).

Example: Vacuum World
- Initial state: defined by the agent location and the dirt locations.
- Actions: left, right, suck.
- Transition model: clean a location or move.
- Goal state: all locations are clean.
- Path cost: e.g., the number of actions.
There are 8 possible atomic states of this system. Why is the number of states exponential in the number of possible locations n? (Each of the n locations can be dirty or clean, giving 2^n dirt configurations, times n agent locations.)

Example: Sliding-tile Puzzle
- Initial state: a given configuration.
- Actions: move the blank left, right, up, or down.
- Transition model: moving the blank moves a tile into the empty square.
- Goal state: the square is empty and tiles 1-8 are in order.
- Path cost: 1 per tile move.
State space size: each state describes the location of each tile (including the empty one), and half of the permutations are unreachable. 8-puzzle: 9!/2 = 181,440 states; 15-puzzle: 16!/2 ≈ 10^13 states.

Example: Robot Motion Planning
- States: real-valued coordinates of the robot joint angles.
- Initial state: the current arm position.
- Actions: continuous motions of the robot joints.
- Goal state: the desired final configuration (e.g., the object is grasped).
- Path cost: time to execute, smoothness of the path, etc.

Solving Search Problems
Given a search problem definition (initial state, actions, transition model, goal state, path cost), how do we find the optimal solution (sequence of actions/states)? Construct a search tree for the state space graph!

Issue: the Transition Model is Not a Tree!
It can have redundant paths:
- Cycles: the agent returns to the same state, but the search tree will create a new node!
- Non-cycle redundant paths: multiple paths that reach the same state.

Search Tree
Superimpose a "what if" tree of possible actions and outcomes (states) on the state space graph. The root node represents the initial state. A child node is reached by an edge representing an action.
The child node's corresponding state is defined by the transition model. Trees cannot have cycles (loops) or multiple paths to the same state; these are called redundant paths. Cycles in the search space must be broken to prevent infinite loops; removing other redundant paths improves search efficiency. A path through the tree corresponds to a sequence of actions (states), and a solution is a path ending in a node representing a goal state. Nodes vs. states: each tree node represents a state of the system, so if redundant paths cannot be prevented, a state can be represented by multiple nodes.

Tree Search Algorithm Outline
1. Initialize the frontier (the set of unexplored known nodes) using the starting state/root node.
2. While the frontier is not empty:
   a) Choose the next frontier node to expand according to the search strategy.
   b) If the node represents a goal state, return it as the solution.
   c) Else expand the node (i.e., apply all possible actions using the transition model) and add its children, representing the newly reached states, to the frontier.

Tree Search Example (Romania)
1. Expand Arad.
2. Expand Sibiu. (Expanding Arad again from Sibiu would be an example of a cycle.)
The frontier is the part of the tree we have constructed so far: the partial plans under consideration. We could also have expanded Timisoara or Zerind!

General Tree Search
Important ideas: the frontier, expansion, and the exploration strategy. The main question: which frontier nodes should be explored next?

Search Strategies: Properties
A search strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions:
- Completeness: does it always find a solution if one exists?
- Optimality: does it always find a least-cost solution?
- Time complexity: how long does it take?
- Space complexity: how much memory does it need?
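The tree search outline above can be sketched in a few lines of Python. This is a minimal illustration, not code from the document: the graph format (a dict mapping each state to (action, next_state) pairs) and the function names are assumptions. With a FIFO frontier, as used here, the sketch behaves exactly like breadth-first search.

```python
from collections import deque

def tree_search(start, is_goal, successors):
    """Generic tree search following the outline above.

    `successors(state)` returns (action, next_state) pairs; each frontier
    entry carries the sequence of actions taken so far (the partial plan)."""
    frontier = deque([(start, [])])        # 1. initialize with root node
    reached = {start}                      # redundant-path check
    while frontier:                        # 2. while frontier is not empty
        state, path = frontier.popleft()   # a) choose next node (FIFO here)
        if is_goal(state):
            return path                    # b) goal found: return the plan
        for action, nxt in successors(state):   # c) expand the node
            if nxt not in reached:
                reached.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                            # frontier empty: no solution
```

Usage on a tiny hypothetical maze graph: `tree_search(1, lambda s: s == 5, lambda s: maze.get(s, []))` returns the shortest action sequence from state 1 to state 5.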
Search Strategies: Time and Space Complexity
Worst-case time and space complexity are measured in terms of the size of the state space n (= the number of nodes in the search tree). Metrics often used when the state space is only implicitly defined by the initial state, actions, and a transition function are:
- b: maximum branching factor of the search tree (number of available actions),
- m: length of the longest path (loops need to be removed),
- d: depth of the optimal solution.

State Space for Search
The state space size is the number of different states the agent and environment can be in. Reachable states are defined by the initial state and the transition model; not all states may be reachable from the initial state. The search tree spans the state space. Note that a single state can be represented by several search tree nodes if we have redundant paths. The state space size is an indication of problem size.

State Space Size Estimation
Even if the algorithm represents the state space using atomic states, we may know that internally the states have a factored representation that can be used to estimate the problem size. The state consists of variables called fluents that represent conditions that can change over time. The basic rule to estimate the state space size for a factored state representation with fluents (variables) x_1, ..., x_k is:

|S| = |x_1| × |x_2| × ... × |x_k|

where |x_i| is the number of possible values of fluent x_i.

Uninformed Search Strategies
The search algorithm/agent is not provided information about how close a state is to the goal state. It just has the labels of the atomic states and the transition function, and it blindly searches, following a simple strategy, until it finds the goal state by chance. Strategies we will discuss: breadth-first search, uniform-cost search, depth-first search, and iterative deepening search.

Breadth-First Search (BFS)
Expansion rule: expand the shallowest unexpanded node in the frontier (= FIFO).
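The state-space-size counting rule above can be checked with a short sketch (hypothetical helper names; the vacuum-world and sliding-tile numbers match the examples given earlier):

```python
from math import factorial

def factored_state_space_size(value_counts):
    """|S| = product of the number of possible values of each fluent."""
    size = 1
    for v in value_counts:
        size *= v
    return size

def vacuum_states(n):
    # Agent position: n values; one dirty/clean fluent (2 values) per location.
    return factored_state_space_size([n] + [2] * n)

def sliding_tile_states(side):
    # Permutations of tiles + blank; half of them are unreachable.
    return factorial(side * side) // 2
```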
Data Structures
- Frontier: holds references to the fringe nodes and is implemented as a FIFO queue.
- Reached: holds references to all visited nodes and is used to prevent visiting nodes more than once (redundant-path checking). Fast lookup is important.
The algorithm builds a tree with links between parent and child.

Breadth-First Search
Strategy: expand the shallowest node first. Implementation: the fringe is a FIFO queue, so the search proceeds in tiers (level by level) from the start toward the goal.

Implementation: BFS
EXPAND adds the next level below a node to the frontier; reached makes sure we do not visit nodes twice (e.g., in a cycle or another redundant path).

Implementation: Expanding the Search Tree
AI tree search creates the search tree while searching. The EXPAND function tries all available actions in the current node using the transition function (RESULT) and returns a list of new nodes for the frontier, each stored in a node structure for the search tree. Yield can also be implemented by returning a list of nodes.

Time and Space Complexity of BFS
(d: depth of the optimal solution; m: maximum depth of the tree; b: maximum branching factor.)
All paths to the depth of the goal are expanded; e.g., with b = 2 and d = 1, the whole level containing the goal may be generated before the goal is found.

Properties of Breadth-First Search
- Complete? Yes.
- Optimal? Yes, if the cost is the same for every step (action); otherwise, use uniform-cost search.
- Time? Number of nodes created: O(b^d).
- Space? Stored nodes: O(b^d).
Note: the large space complexity is usually a bigger problem than the time!

Uniform-Cost Search (= Dijkstra's Shortest Path Algorithm)
Expansion rule: expand the node in the frontier with the least path cost from the initial state.
Implementation: best-first search where the frontier is a priority queue ordered by f(n) = g(n), the path cost (the cost of all actions starting from the initial state).
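Uniform-cost search as defined above — best-first search with a priority queue ordered by path cost g(n) — can be sketched as follows. This is an illustrative sketch, not the document's code: the graph format and function names are assumptions, and `successors(state)` is assumed to yield (action, next_state, step_cost) triples.

```python
import heapq

def uniform_cost_search(start, is_goal, successors):
    """Best-first search with f(n) = g(n), the path cost from the start."""
    frontier = [(0, start, [])]              # priority queue keyed on g(n)
    best_g = {start: 0}                      # cheapest known cost per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path                   # optimal cost and plan
        if g > best_g.get(state, float('inf')):
            continue                         # stale queue entry: skip
        for action, nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # better (cheaper) path
                best_g[nxt] = g2             # re-visit via the better path
                heapq.heappush(frontier, (g2, nxt, path + [action]))
    return None
```

On a small graph where the direct road is more expensive than a detour, the search correctly prefers the cheaper two-step route.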
Breadth-first search is the special case of uniform-cost search where all step costs are equal, i.e., each action costs the same.

Properties of Uniform-Cost Search
(d: depth of the optimal solution; m: maximum depth of the tree; b: maximum branching factor.)
- Complete? Yes, if every step cost is greater than some small positive constant ε > 0.
- Optimal? Yes — nodes are expanded in increasing order of path cost.
- Time? The number of nodes with path cost ≤ the cost of the optimal solution C* is O(b^(1+⌊C*/ε⌋)). This can be greater than O(b^d): the search can explore long paths consisting of small steps before exploring shorter paths consisting of larger steps.
- Space? O(b^(1+⌊C*/ε⌋)).
See Dijkstra's algorithm on Wikipedia.

Implementation: Best-First Search Strategy
The order for expanding the frontier is determined by f(n) = the path cost from the initial state to node n (see BFS for the EXPAND function). The reached check is the difference to BFS: a node is visited again if it can be reached by a better (cheaper) path.

Depth-First Search (DFS)
Expansion rule: expand the deepest unexpanded node in the frontier (the last one added).
- Frontier: a stack (LIFO).
- No reached data structure! Cycle checking checks only the current path; other redundant paths cannot be identified and lead to replicated work.

Implementation: DFS
DFS could be implemented like BFS/best-first search by just taking the last element from the frontier (LIFO). However, to reduce the space complexity to O(bm), the reached data structure needs to be removed. Options:
- A recursive implementation (cycle checking is required, or cycles lead to infinite loops).
- An iterative implementation that builds the tree and removes abandoned branches from memory. Cycle checking is done only against the current path; this is similar to backtracking search.
If we keep only the current path from the root to the current node in memory, then we can check only against that path to prevent cycles; we cannot prevent other redundant paths. We also need to make sure the frontier does not contain the same state more than once.
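A recursive DFS along the lines described above — keeping only the current path for cycle checking, with no reached data structure — might look like this (a sketch with assumed names and graph format):

```python
def depth_first_search(state, is_goal, successors, path=()):
    """Recursive DFS keeping only the current root-to-state path in memory.

    Cycle checking is done against the current path only, so cycles are
    broken, but other redundant paths lead to replicated work."""
    if is_goal(state):
        return []                            # goal reached: empty suffix
    path = path + (state,)                   # current path: root .. state
    for action, nxt in successors(state):
        if nxt in path:                      # would close a cycle: skip
            continue
        result = depth_first_search(nxt, is_goal, successors, path)
        if result is not None:
            return [action] + result         # prepend action on the way back
    return None                              # dead end: backtrack
```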
Time and Space Complexity of DFS
(d: depth of the optimal solution; m: maximum depth of the tree; b: maximum branching factor.)
For example, with b = 2, d = 1, and m = 3, DFS can find a deep goal first — not optimal!
- Time: O(b^m) — the worst case is expanding all paths.
- Space: O(bm) — if it stores only the frontier nodes and the current path.

Properties of Depth-First Search
- Complete? Only in finite search spaces. Cycles can be avoided by checking for repeated states along the path; DFS is incomplete in infinite search spaces (e.g., with cycles).
- Optimal? No — it returns the first solution it finds.
- Time? The worst case is to reach a solution at maximum depth m on the last path: O(b^m). Terrible compared to BFS if m >> d.
- Space? O(bm) is linear in the maximum tree depth, which is very good — but this is achieved only if no reached data structure is used and memory is managed (old branches are forgotten). Cycles can be broken, but redundant paths cannot be checked.

Iterative Deepening Search (IDS)
Can we (a) get DFS's good memory footprint, (b) avoid infinite cycles, and (c) preserve BFS's optimality guarantee? Use depth-restricted DFS and gradually increase the depth:
1. Check if the root node is the goal.
2. Do a DFS searching for a path of length 1.
3. If the goal is not found, do a DFS searching for a path of length 2.
4. If the goal is not found, do a DFS searching for a path of length 3.
5. ...

Properties of Iterative Deepening Search
- Complete? Yes.
- Optimal? Yes, if the step cost is 1 (like BFS).
- Time? Consists of rebuilding trees up to d times: O(b^d). Slower than BFS, but in the same complexity class!
- Space? O(bd) — linear space, even less than DFS, since the depth is limited to d instead of m. Cycles need to be handled by the depth-limited DFS implementation.
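IDS as outlined above can be sketched as a depth-limited DFS called with growing limits (again a hypothetical sketch, not the document's implementation; the graph format and names are assumptions):

```python
def depth_limited_search(state, is_goal, successors, limit, path=()):
    """DFS that gives up after `limit` actions; cycle check on current path."""
    if is_goal(state):
        return []
    if limit == 0:
        return None                          # depth budget exhausted
    path = path + (state,)
    for action, nxt in successors(state):
        if nxt in path:                      # break cycles on the current path
            continue
        result = depth_limited_search(nxt, is_goal, successors, limit - 1, path)
        if result is not None:
            return [action] + result
    return None

def iterative_deepening_search(start, is_goal, successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... like BFS tiers."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, is_goal, successors, limit)
        if result is not None:
            return result                    # shallowest solution, as in BFS
    return None
```

Because the limit grows one level at a time, the first solution found is a shallowest one, even though each pass uses only DFS-like memory.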
Note: IDS produces the same result as BFS but trades better space complexity for worse run time. This makes IDS/DFS the workhorse of AI.

Informed Search
AI search problems typically have a very large search space. We would like to improve efficiency by expanding as few nodes as possible, so the agent can use additional information in the form of "hints" about which states are promising to explore first. These hints are derived from information the agent has (e.g., a map with the goal location marked) or from percepts coming from a sensor (e.g., a GPS sensor and the coordinates of the goal). The agent uses a heuristic function to rank the nodes in the frontier and always selects the most promising node for expansion, following the best-first search strategy. Algorithms: greedy best-first search and A* search.

Heuristic Function
A heuristic function h(n) estimates the cost of reaching a node representing the goal state from the current node. Examples: the Euclidean (straight-line) distance or the Manhattan distance between the current state and the goal state on a grid.

Heuristic for the Romania Problem
Estimate the driving distance from Arad to Bucharest using the straight-line distance: h(Arad) = 366.

Greedy Best-First Search Example
Expansion rule: expand the node that has the lowest value of the heuristic function h(n). For the Romania problem, this yields the route Arad - Sibiu - Fagaras - Bucharest with a total of 140 + 99 + 211 = 450 miles.

Properties of Greedy Best-First Search
- Complete? Yes — best-first search is complete in finite spaces.
- Optimal? No. The greedy route totals 140 + 99 + 211 = 450 miles, while the alternative through Rimnicu Vilcea totals 140 + 80 + 97 + 101 = 418 miles.

Implementation of Greedy Best-First Search
The order for expanding the frontier is determined by the heuristic f(n) = h(n), so we expand the node with the lowest estimated cost to the goal (see BFS for the EXPAND function). The reached check is the difference to BFS: a node is visited again if it can be reached by a better (cheaper) path.
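A greedy best-first sketch orders the frontier by h(n) alone. The graph format ((action, next_state, step_cost) triples) and the dict-backed heuristic in the usage example are assumptions for illustration; the test mirrors the lesson above that greedy search can commit to a route that looks closer but costs more.

```python
import heapq

def greedy_best_first(start, is_goal, successors, h):
    """Best-first search ordered by f(n) = h(n) only.

    Fast, but not optimal: the path cost accumulated so far is ignored."""
    frontier = [(h(start), start, [])]       # priority queue keyed on h(n)
    reached = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for action, nxt, cost in successors(state):
            if nxt not in reached:           # note: never reconsiders a state
                reached.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [action]))
    return None
```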
Properties of Greedy Best-First Search
(d: depth of the optimal solution; m: maximum depth of the tree; b: maximum branching factor.)
- Complete? Yes — best-first search is complete in finite spaces.
- Optimal? No.
- Time? Worst case: O(b^m), like DFS. Best case: O(bm) — if h(n) is 100% accurate, we only expand a single path.
- Space? Same as the time complexity.

The Optimality Problem of Greedy Best-First Search
Greedy best-first search considers only the estimated cost to the goal: a node with a smaller h(n) always looks better. Greedy best-first search will therefore commit to the direction with the smaller heuristic value and never reconsider!

A* Search
Idea: take the cost of the path to n into account to avoid expanding paths that are already very expensive. The evaluation function is the estimated total cost of the path through node n to the goal:

f(n) = g(n) + h(n)

- g(n): the cost so far to reach n (path cost).
- h(n): the estimated cost from n to the goal (heuristic).

An agent using f(n) will stop following a path whose cost so far is already high and choose the alternative with the better (lower) estimated total cost. Note: for greedy best-first search we used just f(n) = h(n).

A* Search Example
Expansion rule: expand the node with the smallest f(n) = g(n) + h(n). (The example expands the Romania graph step by step; an animation comparing the nodes expanded by BFS and A* can be found on Wikipedia.)

Implementation of A* Search
f(n) = g(n) + h(n): the path cost to n plus the heuristic from n to the goal = an estimate of the total cost (see BFS for the EXPAND function). The order for expanding the frontier is determined by f(n). The reached check is the difference to BFS: a node is visited again if it can be reached by a better (cheaper) redundant path.

Optimality: Admissible Heuristics
Definition: a heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. I.e., an admissible heuristic is a lower bound that never overestimates the true cost to reach the goal.
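An A* sketch using f(n) = g(n) + h(n); with the admissible heuristic in the usage example below, it returns the least-cost plan. Names and graph format are assumptions, not the document's code; `successors(state)` is assumed to yield (action, next_state, step_cost) triples.

```python
import heapq

def a_star_search(start, is_goal, successors, h):
    """Best-first search with f(n) = g(n) + h(n).

    With an admissible h this returns a least-cost plan; with h = 0 it
    degenerates to uniform-cost search."""
    frontier = [(h(start), 0, start, [])]    # entries: (f, g, state, actions)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path
        for action, nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # cheaper path to nxt found
                best_g[nxt] = g2             # revisit via the better path
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [action]))
    return None
```

In the test graph, the heuristic is admissible (it never exceeds the true remaining cost), so A* finds the cost-5 route even though the first step of that route looks more expensive.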
Example: the straight-line distance never overestimates the actual road distance.
Theorem: if h(n) is admissible, then A* is optimal.

Proof of Optimality of A*
For goal states, f(n*) = g(n*) + 0, so the solution found at goal n* has cost C* = f(n*) = g(n*). Any unexplored node n has f(n) ≥ f(n*) = C*, or it would have been explored before n*. Since h is an optimistic estimate, any goal state n' reached through such a node n has g(n') ≥ f(n), and therefore g(n') ≥ C*. This proves that n* must be an optimal solution.

Guarantees of A*
A* is optimally efficient:
a. No other tree-based search algorithm that uses the same heuristic can expand fewer nodes and still be guaranteed to find the optimal solution.
b. Any algorithm that does not expand all nodes with f(n) < C* (the lowest cost of reaching a goal node) cannot be optimal: it risks missing the optimal solution.

Properties of A*
- Complete? Yes.
- Optimal? Yes.
- Time? The number of nodes for which f(n) ≤ C* (exponential).
- Space? Same as the time complexity. This is often too much unless a very good heuristic is known!

Designing Heuristic Functions
Heuristics for the 8-puzzle:
- h1(n) = the number of misplaced tiles.
- h2(n) = the total Manhattan distance (the number of squares each tile is away from its desired location). E.g., a tile that needs to move 3 positions contributes 3 to h2.
Are h1 and h2 admissible?

Heuristics from Relaxed Problems
A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem, since the true cost is never smaller. What relaxations are used by h1 and h2?
- h1: if the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1 gives the length of the shortest solution.
- h2: if the rules are relaxed so that a tile can move to any adjacent square, then h2 gives the length of the shortest solution.
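The two 8-puzzle heuristics h1 (misplaced tiles) and h2 (total Manhattan distance) can be written down directly. The state encoding is an assumption for illustration: a 9-tuple read row by row, with 0 as the blank.

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles (blank excluded) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """h2: total city-block distance of each tile from its goal square."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(state)}  # tile -> (row, col)
    total = 0
    for i, tile in enumerate(goal):
        if tile == 0:
            continue                         # the blank does not count
        r, c = divmod(i, 3)                  # goal square of this tile
        tr, tc = pos[tile]                   # current square of this tile
        total += abs(r - tr) + abs(c - tc)
    return total
```

Both are admissible, and h2 is never smaller than h1 (every misplaced tile is at least one square away), illustrating the dominance idea discussed below.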
What relaxations are used for the grid heuristics shown earlier (Euclidean and Manhattan distance between a start and a goal state)? The Euclidean (straight-line) distance relaxes the problem so that the agent can move in a straight line, ignoring the grid and any walls; the Manhattan distance relaxes it so that the agent still moves along the grid but can pass through walls.

Heuristics from Subproblems
Let h3(n) be the cost of getting a subset of the tiles (say, 1, 2, 3, 4) into their correct positions, where the final order of the remaining tiles (*) does not matter. Small subproblems are often easy to solve, and we can precompute and store the exact solution cost for every (or many) possible subproblem instance — a pattern database.

Dominance: Which Heuristic is Better?
Definition: if h1 and h2 are both admissible heuristics and h2(n) ≥ h1(n) for all n, then h2 dominates h1. Is h1 or h2 better for A* search? A* search expands every node with f(n) < C*, i.e., with h(n) < C* − g(n). Since h2 is never smaller than h1, A* search with h2 will expand fewer nodes and is therefore better.

Example: Effect of Information in Search
Typical search costs for the 8-puzzle:
- Solution at depth d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes.
- Solution at depth d = 24: IDS ≈ 54,000,000,000 nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes.

Combining Heuristics
Suppose we have a collection of admissible heuristics h1, ..., hk, but none of them dominates the others. Combining them is easy:

h(n) = max(h1(n), ..., hk(n))

That is, for each node always pick the heuristic that is closest to the real cost to the goal.

Satisficing Search: Weighted A* Search
Often it is sufficient to find a "good enough" solution if it can be found very quickly or with far fewer computational resources, i.e., by expanding fewer nodes. We could use an inadmissible heuristic in A* search (e.g., by multiplying h with a factor w > 1) that sometimes slightly overestimates the optimal cost to the goal:
1. It potentially reduces the number of expanded nodes significantly.
2. It breaks the algorithm's optimality guarantee!
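A weighted best-first sketch with f(n) = g(n) + w·h(n) makes the trade-off concrete: w = 0 behaves like uniform-cost search, w = 1 like exact A*, and larger w approaches greedy best-first search while giving up the optimality guarantee. Names and graph format are assumptions for illustration.

```python
import heapq

def weighted_best_first(start, is_goal, successors, h, w=1.0):
    """Best-first search with f(n) = g(n) + w * h(n).

    Returns (path cost, plan, number of node expansions); w > 1 typically
    expands fewer nodes at the price of possibly suboptimal plans."""
    frontier = [(w * h(start), 0, start, [])]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        expanded += 1
        if is_goal(state):
            return g, path, expanded
        for action, nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + w * h(nxt), g2, nxt, path + [action]))
    return None
```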
Weighted A* search uses f(n) = g(n) + w · h(n). The presented algorithms are special cases:
- A* search: w = 1.
- Uniform-cost search/BFS: w = 0.
- Greedy best-first search: w → ∞ (only the heuristic matters).

Example of Weighted A* Search
(Figure: reduction in the number of expanded nodes for breadth-first search, exact A* search, and weighted A* search, where g(n) = the number of actions to reach n. Source and animation: Wikipedia.)

Implementation as Best-First Search
All discussed search strategies can be implemented using best-first search, which always expands the node with the minimum value of an evaluation function f(n):

  Search strategy         Evaluation function f(n)
  BFS                     number of actions (= path cost with uniform step costs)
  Uniform-cost search     g(n) (= path cost)
  DFS/IDS                 (see the important note below!)
  Greedy best-first       h(n)
  (Weighted) A* search    g(n) + w · h(n)

Important note: do not implement DFS/IDS using best-first search! You would get the poor space complexity of best-first search together with the disadvantages of DFS (not optimal, worse time complexity).

Summary: Uninformed Search Strategies
(b: maximum branching factor of the search tree; d: depth of the optimal solution; m: maximum length of any path in the state space.)

  Algorithm            Complete?                        Optimal?                      Time complexity     Space complexity
  BFS                  Yes                              If all step costs are equal   O(b^d)              O(b^d)
  Uniform-cost search  Yes                              Yes                           O(b^(1+⌊C*/ε⌋))     O(b^(1+⌊C*/ε⌋))
  DFS                  In finite spaces (cycle check)   No                            O(b^m)              O(bm)
  IDS                  Yes                              If all step costs are equal   O(b^d)              O(bd)

Summary: All Search Strategies
In addition to the uninformed strategies above:

  Algorithm            Complete?                        Optimal?                      Time complexity                  Space complexity
  Greedy best-first    In finite spaces (cycle check)   No                            Worst: O(b^m); best: O(bm)       same as time
  A* search            Yes                              Yes                           Number of nodes with f(n) ≤ C*   same as time

Planning vs. Execution Phase
1. Planning is done by a planning function using search. The result is a plan.
2. The plan can be executed by a model-based agent function.
The model used is the plan plus a step counter, so the agent function can follow the plan. For example, with the plan

  Step  Action
  1     S
  2     S
  3     S
  4     E

the agent's state is the program counter (e.g., currently at step 2 in the plan). Note: during execution the agent does not use percepts or the transition function; it blindly follows the plan. Caution: this only works in an environment with deterministic transitions.

Complete Planning Agent for a Maze-Solving Agent
The map serves as the transition function plus the initial and goal states. The physical agent runs an event loop:
- Read the sensors (percepts).
- Call the agent function.
- Execute the returned action in the physical environment.
- Repeat.
The agent function follows the plan (using a counter into the plan), or it calls the planning function if there is no plan yet or if it thinks the current plan does not work based on the percepts (replanning).

Conclusion
Tree search can be used for planning actions for goal-based agents in known, fully observable, and deterministic environments. Issues are:
- The large search space typically does not fit into memory; we use a transition function as a compact description of the transition model.
- The search tree is built on the fly, and we have to deal with cycles, redundant paths, and memory management.
- IDS is a memory-efficient method often used in AI for uninformed search.
- Informed search uses heuristics based on knowledge or percepts to improve search performance (i.e., A* expands fewer nodes than BFS).
