AI Lecture Notes on Uninformed Search
Prof. Ahmed Kafafy

Summary
These lecture notes cover uninformed search algorithms in artificial intelligence. They detail problem-solving agents, problem definitions, and various search strategies. The document also includes examples like the 8-puzzle and the Tower of Hanoi.
Full Transcript
Solving Problems by Searching (Chapter 3)
Prof. Ahmed Kafafy

Outline
- Problem-solving agents
- Problem types
- Problem formulation
- Example problems
- Basic search algorithms
- Uninformed search
- Informed search

Problem-Solving Agents
- Problem solving is fundamental to many AI-based applications.
- Problem solving is the process of generating solutions from observed or given data.
- A problem space encompasses all valid states that can be generated by applying any combination of operators to any combination of objects.
- A solution is a combination of operations and objects that achieves the goals.
- Search refers to searching for a solution in a problem space.

Problem Definitions
- Define a state space that contains all possible configurations of the relevant objects.
- Specify one or more states from which the problem-solving process may start (initial states).
- Specify one or more states that would be acceptable solutions to the problem (goal states).
- Specify a set of rules that describe the available actions (operators).
- The search process uses these rules, in combination with an appropriate control strategy, to move through the problem space until a path from an initial state to a goal state is found.

Problem Space
- A problem space is represented by a directed graph, where nodes represent search states and paths represent the operators applied to change the state.
- To simplify search algorithms, it is often convenient to logically represent a problem space as a tree.

State Change
- A state space is the set of all states reachable from the initial state.
- The successor function moves one state to another state.
- Successor function:
  - a description of the possible actions; a set of operators.
  - a transformation function on a state representation, which converts that state into another state.
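To make the successor-function idea concrete, here is a minimal Python sketch (not from the slides) for the 8-puzzle, which these notes use as a running example; the state encoding and the name `successors` are my own choices.

```python
# Hypothetical sketch (not from the slides): a successor function for the
# 8-puzzle.  A state is a tuple of 9 entries read row by row; 0 is the blank.
def successors(state):
    """Return (action, new_state) pairs for every legal move of the blank."""
    moves = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}
    blank = state.index(0)
    result = []
    for action, delta in moves.items():
        target = blank + delta
        if not 0 <= target < 9:
            continue                       # move would leave the board
        if delta in (-1, +1) and target // 3 != blank // 3:
            continue                       # no wrapping across row edges
        new_state = list(state)
        new_state[blank], new_state[target] = new_state[target], new_state[blank]
        result.append((action, tuple(new_state)))
    return result

# The blank in a corner has 2 legal moves; in the center it has 4.
print(len(successors((0, 1, 2, 3, 4, 5, 6, 7, 8))))   # → 2
print(len(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))))   # → 4
```

Applying `successors` repeatedly from an initial state generates exactly the state-space graph described above.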
Models To Be Studied: State-Based Models
- Model the task as a graph of all possible states, called a "state-space graph".
- A state captures all the relevant information about the past needed to act (optimally) in the future.
- Actions correspond to transitions from one state to another.
- Solutions are defined as a sequence of steps/actions (i.e., a path in the graph).

Building Goal-Based Agents
Key questions that need to be addressed:
- What goal does the agent need to achieve?
- What knowledge does the agent need?
- What actions does the agent need to do?

Many AI (and Non-AI) Tasks Can Be Formulated as Search Problems
The goal is to find a sequence of actions:
- Puzzles
- Games
- Navigation
- Assignment
- Motion planning
- Scheduling
- Routing

Example: Tower of Hanoi
- This puzzle involves a set of rings of different sizes that can be placed on three different pegs.
- The puzzle starts with the rings arranged as in (a); the goal is to move them all to the arrangement in (b).
- Condition: only the top ring on a peg can be moved, and it may only be placed on a larger ring or on an empty peg.

Search Example: Route Finding
- Actions: go straight, turn left, turn right
- Goal: shortest? fastest? most scenic?
- Example: Romania

Search Example: River Crossing Problem
- Goal: all on the right side of the river
- Rules: (1) the farmer must row the boat; (2) there is only room for one other
- Actions: F> (farmer alone), FC>, FD>, FS>, and the reverse crossings (<)
- Constraints: the dog bites the sheep, and the sheep eats the cabbage, if left together without the farmer

Search Example: 8-Puzzle
- Actions: move the blank
- Goal: reach a certain configuration
- Condition: the move stays within the board
- Transformation: the blank moves Left, Right, Up, or Down
- Solution: an optimal sequence of operators

Search Example: Water Jugs Problem
- Given 4-liter and 3-liter pitchers, how do you get exactly 2 liters into the 4-liter pitcher?

Search Example: Robot Motion Planning
- Actions: translate and rotate joints
- Goal: fastest? most energy efficient? safest?
Search Example: Natural Language Translation
- Italian → English: la casa blu → the blue house
- Actions: translate single words (e.g., la → the)
- Goal: fluent English? preserves meaning?

Search Example: 8-Queens
- State space: configurations of n = 8 queens
- Initial state: a configuration without queens on the board
- Goal state: n = 8 queens placed such that no queen attacks any other
- Operators: place a queen on the board
- Condition: the new queen is not attacked by any queen already placed

Search Example: Remove 5 Sticks Problem
- Remove exactly 5 of the 17 sticks so the resulting figure forms exactly 3 squares.

Basic Search Task Assumptions (usually, though not for games)
- Fully observable
- Deterministic
- Static
- Discrete
- Single agent
- Solution is a sequence of actions

What Knowledge Does the Agent Need?
- The information needs to be:
  - sufficient to describe all relevant aspects for reaching the goal
  - adequate to describe the world state / situation
- The fully observable assumption, also known as the closed world assumption, means that all necessary information about a problem domain is accessible, so each state is a complete description of the world; there is no missing information at any point in time.

How Should the Environment Be Represented?
- Knowledge representation problem:
  - What information from the sensors is relevant?
  - How to represent domain knowledge?
- Determining what to represent is difficult and is usually left to the system designer to specify.
- Problem state = a representation of all necessary information about the environment.
- State space (aka problem space) = all possible valid configurations of the environment.

What Goal Does the Agent Want to Achieve?
- How do you describe the goal?
  - as a task to be accomplished
  - as a state to be reached
  - as a set of properties to be satisfied
- How do you know when the goal is reached?
  - with a goal test that defines what it means to have achieved/satisfied the goal
  - or, with a set of goal states
- Determining the goal is usually left to the system designer or user to specify.

What Actions Does the Agent Need?
- The Discrete and Deterministic task assumptions imply that, given an action (aka operator or move) and a description of the current state of the world, the action completely specifies:
  - whether the action can be applied (i.e., is legal)
  - the exact state of the world after the action is performed in the current state (no "history" information is needed to compute the successor state)
- A finite set of actions/operators needs to be:
  - decomposed into atomic steps that are discrete and indivisible, and can therefore be treated as instantaneous
  - sufficient to describe all necessary changes
- The number of actions needed depends on how the world states are represented.

Single-State Problem Formulation
A problem is defined by five items:
1. States: e.g., cities in Romania
2. Initial state: e.g., "at Arad"
3. Actions or successor function: S(x) = set of action-state pairs, e.g., S(Arad) = { … }
4. Goal test: can be
   - explicit, e.g., x = "at Bucharest"
   - implicit, e.g., Checkmate(x)
5. Path cost (additive): e.g., sum of distances, or number of actions executed; c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state.

Search Example: 8-Puzzle
- States = configurations
- Actions = up to 4 kinds of moves: up, down, left, right

Water Jugs Problem
Given 4-liter and 3-liter pitchers, how do you get exactly 2 liters into the 4-liter pitcher?
- State: (x, y) = liters in the 4-liter and 3-liter pitchers, respectively
- Constraints: 0 ≤ x ≤ 4 and 0 ≤ y ≤ 3
- Actions: empty, fill, pour water between pitchers
- Initial state: (0, 0)
- Goal state: (2, *)

Action / Successor Functions
1. (x, y | x < 4) → (4, y)  "Fill 4"
2. (x, y | y < 3) → (x, 3)  "Fill 3"
3. (x, y | x > 0) → (0, y)  "Empty 4"
4. (x, y | y > 0) → (x, 0)  "Empty 3"
5. (x, y | x+y ≥ 4 and y > 0) → (4, y − (4 − x))  "Pour from 3 into 4 until 4 is full"
6. (x, y | x+y ≥ 3 and x > 0) → (x − (3 − y), 3)  "Pour from 4 into 3 until 3 is full"
7. (x, y | x+y ≤ 4 and y > 0) → (x+y, 0)  "Pour all water from 3 into 4"

Formalizing Search in a State Space
- A state space is a directed graph (V, E):
  - V is a set of nodes (vertices)
  - E is a set of arcs (edges); each arc is directed from one node to another
- Each node is a data structure that contains a state description plus other information, such as a link to the parent node, the name of the action that generated this node (from its parent), and other bookkeeping data.
- Each arc corresponds to one of the finite number of actions: when the action is applied to the state associated with the arc's source node, the resulting state is the state associated with the arc's destination node.
- Each arc has a fixed, positive cost, corresponding to the cost of the action.
- Each node has a finite set of successor nodes, corresponding to all of the legal actions that can be applied at the source node's state.
- Expanding a node means generating all of its successor nodes and adding them and their associated arcs to the state-space search tree.
- One or more nodes are designated as start nodes.
- A goal test is applied to a node's state to determine whether it is a goal node.
- A solution is a sequence of actions associated with a path in the state space from a start node to a goal node:
  - sometimes just the goal state matters (e.g., cryptarithmetic)
  - sometimes the path from start to goal matters (e.g., 8-puzzle)
- The cost of a solution is the sum of the arc costs on the solution path.

Search Summary
- A solution is an ordered sequence of primitive actions (steps): f(x) = a1, a2, …, an, where x is the input.
- Model the task as a graph of all possible states and actions, and a solution as a path in the graph.
- A state captures all the relevant information about the past.

Vacuum World State Space Graph
- States? integer dirt and robot locations
- Initial state?
- Actions? Left, Right, Suck
- Goal test? no dirt at all locations
- Path cost? 1 per action

Example: The 8-Puzzle
- States? locations of tiles
- Initial state? given
- Actions? move blank left, right, up, down
- Goal test? = goal state (given)
- Path cost? 1 per move

Real-World Search Problems
- Route finding (computer networks, airline travel planning systems, …)
- Travelling salesman optimization problem (package delivery, automatic drills, …)
- Layout problems (furniture layout, packaging, …)
- Assembly sequencing (assembly of electric motors, …)
- Task scheduling (manufacturing, timetables, …)

Problem Solution
- Problems whose solution is a description of how to reach a goal state from the initial state:
  - n-puzzle
  - route-finding problem
  - assembly sequencing
- Problems whose solution is simply a description of the goal state itself:
  - 8-queens problem
  - scheduling problems
  - layout problems

From Search Graphs to Search Trees
- The set of all possible paths of a graph can be represented as a tree.
- A tree is a directed acyclic graph all of whose nodes have at most one parent.
  - The root of a tree is a node with no parents.
  - A leaf is a node with no children.
  - The branching factor of a node is the number of its children.
- Graphs can be turned into trees by duplicating nodes and breaking cyclic paths, if any.
- To unravel a graph into a tree, choose a root node and trace every path from that node until you reach a leaf node or a node already on that path.

Implementation: States vs. Nodes
- A state is a (representation of) a physical configuration.
- A node is a data structure constituting part of a search tree; it includes a state, parent node, action, path cost g(x), and depth.
- The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

Search Strategies
- A search strategy is defined by picking the order of node expansion.
- Strategies are evaluated along the following dimensions:
  - completeness: does it always find a solution if one exists?
  - time complexity: number of nodes generated
  - space complexity: maximum number of nodes in memory
  - optimality: does it always find a least-cost solution?
- Time and space complexity are measured in terms of:
  - b: maximum branching factor of the search tree
  - d: depth of the least-cost solution
  - m: maximum depth of the state space (may be ∞)

Search Algorithm Classifications: Uninformed Search Strategies
Uninformed search strategies use only the information available in the problem definition (also called blind search). The term means that they have no additional information about states beyond that provided in the problem definition.
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening search
- Bidirectional search

Formalizing Search: F C D S (Farmer, Cabbage, Dog, Sheep)
A search problem has five components: a state space, I, G, actions, and a cost function.
1. State space: all valid configurations
2. Initial states I: a set of start states, I = {(FCDS, )}
3. Goal states G: a set of goal states, G = {( , FCDS)}
4. An action function successors(s): the states reachable in one step (one arc) from s
   - successors((FCDS, )) = {(CD, FS)}
   - successors((CDF, S)) = {(CD, FS), (D, FCS), (C, FSD)}
5. A cost function cost(s, s′): the cost of moving from s to s′
The goal of search is to find a solution path from a state in I to a state in G.

State Space Search Assumptions
- The user defines what knowledge is encoded in a state.
- Closed world assumption:
  - all necessary information about the problem domain is available in the state description
  - no incomplete or noisy data
- Each action specifies a primitive, deterministic, discrete event that includes information on when that operator can legally be used, and exactly what the state of the world will be after it is applied.

State Space = A Directed Graph
(Figure: the river-crossing state-space graph, from Start (FCDS, ) to Goal ( , FCDS).)
- In general, there will be many generated, but un-expanded, states at any given time during a search.
- One has to choose which one to "expand" next.

Different Search Strategies
- The generated, but not yet expanded, states define the Frontier (aka Open or Fringe) set.
- The essential difference between strategies is: which state in the Frontier is expanded next?

Tree Search Algorithms
- Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states).
- The fringe is the list of nodes waiting to be expanded; it is also called the Frontier.
- The fringe is a sorted list, usually in the form of a queue.
- The strategy by which the fringe is sorted determines how the tree will be searched.
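To illustrate that the fringe ordering alone picks the strategy, here is a Python sketch (my own code, not from the slides). The example graph and its edge costs are the ones used in the worked BFS/DFS traces in these notes, and the goal test is done when a node is removed from the frontier.

```python
from collections import deque

# Example graph from the lecture traces: start S, goal G, directed edges
# with the costs shown on the slides (costs are unused by BFS/DFS).
GRAPH = {
    "S": [("A", 5), ("B", 2), ("C", 4)],
    "A": [("D", 9), ("E", 4)],
    "B": [("G", 6)],
    "C": [("F", 2)],
    "D": [("H", 7)],
    "E": [("G", 6)],
    "F": [("G", 1)],
    "G": [], "H": [],
}

def tree_search(start, goal, use_stack=False):
    """Generic TREE SEARCH: the frontier discipline picks the strategy.
    FIFO queue -> breadth-first; LIFO stack -> depth-first."""
    frontier = deque([[start]])                # frontier holds whole paths
    while frontier:
        path = frontier.pop() if use_stack else frontier.popleft()
        node = path[-1]
        if node == goal:                       # goal test on removal
            return path
        children = [child for child, _cost in GRAPH[node]]
        if use_stack:
            children.reverse()                 # visit children alphabetically
        for child in children:
            frontier.append(path + [child])
    return None

print(tree_search("S", "G"))                   # → ['S', 'B', 'G']
print(tree_search("S", "G", use_stack=True))   # → ['S', 'A', 'E', 'G']
```

The two calls reproduce the BFS result (path S, B, G) and the DFS result (path S, A, E, G) from the traces later in these notes.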
Formalizing Search in a State Space
State-space search is the process of searching through a state space for a solution by making explicit a sufficient portion of an implicit state-space graph, in the form of a search tree, to include a goal node.

TREE SEARCH algorithm:

    Frontier = {S}, where S is the start node
    Loop do
        if Frontier is empty then return failure
        pick a node, n, from Frontier          // called "expanding" node n
        if n is a goal node then return solution
        generate all of n's successor nodes and add them all to Frontier
        remove n from Frontier

Notes:
- This algorithm does NOT detect the goal when a node is generated.
- This algorithm does NOT detect loops (i.e., repeated states) in the state space.
- Each node implicitly represents a partial solution path from the start node to the given node, and the cost of that partial path.
- From a given node there may be many possible paths, and many possible solutions, that have this partial path as a prefix.

A State Space Graph
(Figure: an example state-space graph over nodes a, b, c, d, e, f, h, p, q, r, t, u, v, with marked START and GOAL nodes. What is the corresponding search tree?)
State Space Graph
(Figure: an example weighted graph with start S and goal G; edges S→A:5, S→B:2, S→C:4, A→D:9, A→E:4, B→G:6, C→F:2, D→H:7, E→G:6, F→G:1.)
The size of a problem is usually described in terms of the number of possible states:
- Tic-Tac-Toe: 3^9 states
- Checkers: 10^40 states
- Rubik's Cube: 10^19 states
- Chess: 10^120 states

State-Space Search Algorithm
Node generalSearch(Problem problem, List nodesDS)
- Problem: an object that contains
  - the start state (getStartState)
  - the operators/moves and operator costs
  - the goal test (isGoal): tests whether a given node's state satisfies all goal conditions
- List: a data structure such as a stack, queue, or priority queue
  - maintains a "list" of unexpanded nodes that is initially empty
  - add: adds an unexpanded node to the "list"
  - remove: removes the next unexpanded node from the "list"
  - isEmpty: returns true if the "list" has no more nodes
- Returns a Node: either a goal node or "failure"

Uninformed Search on Trees
- Uninformed means we only know the goal test and the successors() function, but not which non-goal states are better.
- For now, also assume the state space is a tree; that is, we won't worry about repeated states (we will relax this later).

State-Space Search Algorithm

    // Note: this algorithm doesn't detect loops in the state space
    Node generalSearch(Problem problem, List frontier) {
        frontier.add(new Node(problem.getStartState()));
        while (true) {
            if (frontier.isEmpty()) return new Node("failure");
            Node node = frontier.remove();
            if (problem.isGoal(node.getState())) return node;
            frontier.add(expand node given problem operators);
            // expand: generates all of a node's children nodes
            // Note: the goal test is NOT done when nodes are generated
        }
    }

Key Issues of the State-Space Search Algorithm
- The search process constructs a "search tree":
  - the root is the start state
  - leaf nodes are unexpanded nodes (in the Frontier list) or "dead ends" (nodes that aren't goals and have no successors because no operators were applicable)
  - the goal node is the last leaf node found
- Loops in the graph may cause the "search tree" to be infinite even if the state space is small.
- Changing the Frontier ordering leads to different search strategies.

8-Puzzle State-Space Search Tree
(Figure: partial 8-puzzle search tree; not all nodes shown, e.g., no "backwards" moves.)

Uninformed Search Strategies
Uninformed search: strategies that order nodes without using any domain-specific information, i.e., without using any information stored in a state.
- BFS: breadth-first search — a queue (FIFO) is used for the Frontier: remove from front, add to back.
- DFS: depth-first search — a stack (LIFO) is used for the Frontier: remove from front, add to front.

Breadth-First Search (BFS)
Expand the shallowest node first:
1. Examine states one step away from the initial states.
2. Examine states two steps away from the initial states.
3. And so on.
For implementation, the fringe is a FIFO queue, i.e., new successors go at the end.

BFS trace — generalSearch(problem, queue) on the example graph (start S, goal G):

    expanded node | Frontier list
    (none)        | {S}
    S (not goal)  | {A,B,C}
    A (not goal)  | {B,C,D,E}
    B (not goal)  | {C,D,E,G}
    C (not goal)  | {D,E,G,F}
    D (not goal)  | {E,G,F,H}
    E (not goal)  | {G,F,H,G}
    G (goal)      | {F,H,G}  — no expand

Result: path S, B, G with cost 8; nodes tested: 7, expanded: 6.

General State-Space Search Algorithm

    function general-search(problem, QUEUEING-FUNCTION)
    ;; problem describes the start state, actions, goal test, and action costs
    ;; queueing-function is a comparator function that ranks two states
    ;; general-search returns either a goal node or "failure"
    frontier = MAKE-QUEUE(MAKE-NODE(problem.INITIAL-STATE))
    loop
        if EMPTY(frontier) then return "failure"
        node = REMOVE-FRONT(frontier)
        if problem.GOAL-TEST(node.STATE) succeeds then return node
        frontier = QUEUEING-FUNCTION(frontier, EXPAND(node, problem.ACTIONS))
        ;; successors(node) = EXPAND(node, ACTIONS)
        ;; Note: the goal test is NOT done when nodes are generated
        ;; Note: this algorithm does not detect loops
    end

Evaluating Search Strategies
- Completeness: if a solution exists, will it be found? A complete algorithm will find a solution (not necessarily all solutions).
- Optimality / admissibility: if a solution is found, is it guaranteed to be optimal? An admissible algorithm will find a solution with minimum cost.
- Time complexity: how long does it take to find a solution?
  - usually measured for the worst case, by counting the number of nodes expanded
- Space complexity: how much space is used by the algorithm? Measured by the maximum Frontier size during the search.

What's in the Frontier for BFS?
- If the goal is at depth d, how big is the Frontier (worst case)?

Properties of Breadth-First Search
- Complete? Yes (if b is finite)
- Time? 1 + b + b² + b³ + … + b^d + b(b^d − 1) = O(b^(d+1))
- Space? O(b^(d+1)) (keeps every node in memory)
- Optimal? Yes (if cost = 1 per step)
- Space is the bigger problem (more so than time)

Breadth-First Search (BFS)
- Complete? Yes.
- Optimal / admissible? Yes, if all operators (i.e., arcs) have the same constant cost, or costs are positive and non-decreasing with depth; otherwise not optimal, but it does guarantee finding a solution of shortest length (i.e., fewest arcs).
- Time and space complexity: O(b^d) (i.e., exponential), where d is the depth of the solution and b is the branching factor at each non-leaf node.
- Very slow to find solutions with a large number of steps, because it must look at all shorter-length possibilities first.
- A complete search tree has a total number of nodes of 1 + b + b² + … + b^d = (b^(d+1) − 1)/(b − 1).
- For example, with d = 12 and b = 10: 1 + 10 + 100 + … + 10^12 = (10^13 − 1)/9 ≈ 10^12. If BFS expands 1,000 nodes/sec and each node uses 100 bytes of storage, then BFS will take 35 years to run in the worst case, and it will use 111 terabytes of memory!
- Memory requirements are a bigger problem than execution time.
- Exponential-complexity search problems cannot be solved by uninformed search methods for any but the smallest instances (assuming a branching factor b = 10, 1 million nodes/second, and 1,000 bytes/node).

Problem: Given State Space
- Assume child nodes are visited in increasing alphabetical order.
- BFS = ?
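The worst-case figures quoted above (b = 10, d = 12, 1,000 nodes per second, 100 bytes per node) can be checked with a few lines of arithmetic:

```python
# Checking the slide's worst-case BFS figures: b = 10, d = 12,
# 1,000 nodes expanded per second, 100 bytes of storage per node.
b, d = 10, 12
nodes = (b**(d + 1) - 1) // (b - 1)       # 1 + b + b^2 + ... + b^d
years = nodes / 1000 / (3600 * 24 * 365)
terabytes = nodes * 100 / 1e12
print(f"~{years:.0f} years, ~{terabytes:.0f} TB")   # → ~35 years, ~111 TB
```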
Breadth-First Search
- List the sequence of squares "expanded".
- Positions are pushed onto the queue in clockwise order, so when removed they are "visited" in clockwise order.
- Assume "cycle checking" is done, so a node is not added if it occurs on the path back to the root in the search tree.

Depth-First Search (DFS)
Expand the deepest node first:
1. Select a direction, go deep to the end.
2. Slightly change the end.
3. Slightly change the end some more…
Use a stack to order nodes on the Frontier.

DFS trace — generalSearch(problem, stack) on the example graph (start S, goal G):

    expanded node | Frontier
    (none)        | {S}
    S (not goal)  | {A,B,C}
    A (not goal)  | {D,E,B,C}
    D (not goal)  | {H,E,B,C}
    H (not goal)  | {E,B,C}
    E (not goal)  | {G,B,C}
    G (goal)      | {B,C}  — no expand

Result: path S, A, E, G with cost 15; nodes tested: 6, expanded: 5.

Problem: Given State Space
- Assume child nodes are visited in increasing alphabetical order.
- Do cycle checking: don't add a node to the Frontier if its state already occurs on the path back to the root.
- DFS = ?

Depth-First Search (DFS)
- Complete? No: it fails in infinite-depth spaces and in spaces with loops.
  - Modified to avoid repeated states along the path, it is complete in finite spaces.
- Time? O(b^m): terrible if m is much larger than d, but if solutions are dense, it may be much faster than breadth-first.
- Space? O(bm), i.e., linear space!
- Optimal? No.
- May not terminate without a depth bound, i.e., cutting off search below a fixed depth D.
- Not complete, with or without cycle detection, and with or without a depth cutoff.
- Not optimal / admissible.
- Can find long solutions quickly if lucky.
- Time complexity: O(b^d) (exponential); space complexity: O(bd) (linear), where d is the depth of the solution and b is the branching factor at each non-leaf node.
- Performs "chronological backtracking": when the search hits a dead end, it backs up one level at a time; this is problematic if the mistake occurs because of a bad action choice near the top of the search tree.

Uniform-Cost Search (UCS)
- Use a priority queue to order nodes on the Frontier list, sorted by path cost.
- Let g(n) = cost of the path from the start node s to the current node n.
- Sort nodes by increasing value of g.

UCS trace — generalSearch(problem, priorityQueue) on the example graph (start S, goal G):

    expanded node  | Frontier list
    (none)         | {S:0}
    S (not goal)   | {B:2, C:4, A:5}
    B (not goal)   | {C:4, A:5, G:2+6}
    C (not goal)   | {A:5, F:4+2, G:8}
    A (not goal)   | {F:6, G:8, E:5+4, D:5+9}
    F (not goal)   | {G:4+2+1, G:8, E:9, D:14}
    G (goal, g=7)  | {G:8, E:9, D:14}  — no expand

Result: path S, C, F, G with cost 7; nodes tested: 6, expanded: 5.

Problem: Given State Space
- Assume child nodes are visited in increasing alphabetical order.
- UCS = ?
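A compact sketch of UCS with a heap-based priority queue (my own code; the example graph is the one from the trace above). As the notes stress, the goal test happens when a node is removed from the frontier, not when it is generated.

```python
import heapq

# Same example graph as the UCS trace: start S, goal G.
GRAPH = {
    "S": [("A", 5), ("B", 2), ("C", 4)],
    "A": [("D", 9), ("E", 4)],
    "B": [("G", 6)],
    "C": [("F", 2)],
    "D": [("H", 7)],
    "E": [("G", 6)],
    "F": [("G", 1)],
    "G": [], "H": [],
}

def uniform_cost_search(start, goal):
    """UCS: the frontier is a priority queue ordered by path cost g(n);
    the goal test happens when a node is REMOVED, not when generated."""
    frontier = [(0, [start])]                  # (g, path) pairs in a heap
    while frontier:
        g, path = heapq.heappop(frontier)      # cheapest path so far
        node = path[-1]
        if node == goal:
            return g, path
        for child, step_cost in GRAPH[node]:
            heapq.heappush(frontier, (g + step_cost, path + [child]))
    return None

print(uniform_cost_search("S", "G"))   # → (7, ['S', 'C', 'F', 'G'])
```

The result matches the trace: path S, C, F, G with cost 7, even though BFS on the same graph returns the costlier path S, B, G.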
Uniform-Cost Search (UCS) l Called Dijkstra's Algorithm in the literature l Similar to Branch and Bound Algorithm l Complete: Yes, if step cost ≥ ε l Optimal / Admissible – requires that the goal test is done when a node is removed from the Frontier rather than when the node is generated by its parent node l Time/Space Complexity? # of nodes with g ≤ cost of optimal solution, – O(bceiling(C*/ ε)) where C * is the cost of the optimal solution (i.e., exponential) 10 4 Iterative-Deepening Search (IDS) l requires modification to DFS search algorithm: – do DFS to depth 1 and treat all children of the start node as leaves – if no solution found, do DFS to depth 2 – repeat by increasing “depth bound” until a solution found l Start node is at depth 0 10 5 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 1, # of nodes expanded: 0, tested: 0 start expnd. node Frontier {S} 5 2 4 A B C 9 4 6 2 6 G 1 D E goal F 7 10 H 6 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 1, # of nodes tested: 1, expanded: 1 start expnd. node Frontier {S} 5 2 4 S not goal {A,B,C} A B C 9 4 6 2 6 G 1 D E goal F 7 10 H 7 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 1, # of nodes tested: 2, expanded: 1 start expnd. node Frontier {S} 5 2 4 S {A,B,C} A not goal {B,C} no expand A B C 9 4 6 2 6 G 1 D E goal F 7 10 H 8 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 1, # of nodes tested: 3, expanded: 1 start expnd. node Frontier {S} 5 2 4 S {A,B,C} A {B,C} A B C B not goal {C} no expand 9 4 6 2 6 G 1 D E goal F 7 10 H 9 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 1, # of nodes tested: 4, expanded: 1 start expnd. node Frontier {S} 5 2 4 S {A,B,C} A {B,C} A B C B {C} C not goal { } no expand-FAIL 9 4 6 2 6 G 1 D E goal F 7 11 H 0 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 2, # of nodes tested: 4(1), expanded: 2 start expnd. 
node Frontier {S} 5 2 4 S {A,B,C} A {B,C} A B C B {C} 9 4 6 2 C {} S no test {A,B,C} 6 G 1 D E goal F 7 11 H 1 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 2, # of nodes tested: 4(2), expanded: 3 start expnd. node Frontier {S} 5 2 4 S {A,B,C} A {B,C} A B C B {C} 9 4 6 2 C {} S {A,B,C} 6 G 1 D E goal F A no test {D,E,B,C} 7 11 H 2 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 2, # of nodes tested: 5(2), expanded: 3 start expnd. node Frontier {S} 5 2 4 S {A,B,C} A {B,C} A B C B {C} 9 4 6 2 C {} S {A,B,C} 6 G 1 D E goal F A {D,E,B,C} D not goal {E,B,C} no expand 7 11 H 3 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 2, # of nodes tested: 6(2), expanded: 3 start expnd. node Frontier {S} 5 2 4 S {A,B,C} A {B,C} A B C B {C} 9 4 6 2 C {} S {A,B,C} 6 G 1 D E goal F A {D,E,B,C} D {E,B,C} 7 E not goal {B,C} no expand 11 H 4 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 2, # of nodes tested: 6(3), expanded: 4 start expnd. node Frontier {S} 5 2 4 S {A,B,C} A {B,C} A B C B {C} 9 4 6 2 C {} S {A,B,C} 6 G 1 D E goal F A {D,E,B,C} D {E,B,C} 7 E {B,C} 11 B no test {G,C} H 5 Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 2, # of nodes tested: 7(3), expanded: 4 start expnd. node Frontier {S} 5 2 4 S {A,B,C} A {B,C} A B C B {C} 9 4 6 2 C {} S {A,B,C} 6 G 1 D E goal F A {D,E,B,C} D {E,B,C} 7 E {B,C} 11 B {G,C} H 6 G goal {C} no expand Iterative-Deepening Search (IDS) deepeningSearch(problem) S depth: 2, # of nodes tested: 7(3), expanded: 4 start expnd. 
Result: the path found is S, B, G, with cost 8.

Iterative-Deepening Search (IDS)
l Has the advantages of BFS:
– completeness
– optimality, as stated for BFS
l Has the advantages of DFS:
– limited space
– in practice, even with the redundant effort, it still finds longer paths more quickly than BFS
l Space complexity: O(bd) (i.e., linear, like DFS)
l Time complexity is a little worse than BFS or DFS
– because nodes near the top of the search tree are generated multiple times (redundant effort)
l Worst-case time complexity: O(b^d), i.e., exponential
– because most nodes are near the bottom of the tree

How much redundant effort is done?
l The number of times nodes are generated:
1·b^d + 2·b^(d−1) + … + d·b ≤ b^d / (1 − 1/b)² = O(b^d)
– d: the solution's depth
– b: the branching factor at each non-leaf node
l For example, for b = 4:
4^d / (1 − 1/4)² = 4^d / (0.75)² ≈ 1.78 × 4^d
– in the worst case, 78% more nodes are searched (redundant effort) than exist at depth d
– as b increases, this percentage decreases

Iterative-Deepening Search
l Trades a little time for a huge reduction in space
– lets you do breadth-first search with (more space-efficient) depth-first search
l "Anytime" algorithm: good for response-time-critical applications, e.g., games
l An "anytime" algorithm is an algorithm that can return a valid solution to a problem even if it is interrupted at any time before it ends. The algorithm is expected to find better and better solutions the longer it runs.

Bidirectional Search
l Breadth-first search from both the start and the goal
l Stop when the two Frontiers meet
l Generates O(b^(d/2)) nodes instead of O(b^d)
[Figure: two search frontiers growing from the start node and the goal node toward each other]

Which Direction Should We Search?
Our choices: forward, backward, or bidirectional.
The issues:
l How many start and goal states are there?
l The branching factors in each direction
l How much work is it to compare states?
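The depth-bounded DFS loop traced above can be sketched in a few lines of Python. This is an illustration only, not the notes' own code; the graph dictionary and function names are assumptions, chosen to match the lecture's IDS example (S expands to A, B, C; A to D, E; B to the goal G):

```python
# Minimal iterative-deepening search sketch (illustration only).
# The graph mirrors the lecture's IDS trace example.
GRAPH = {
    "S": ["A", "B", "C"],
    "A": ["D", "E"],
    "B": ["G"],
    "C": [], "D": [], "E": [], "G": [],
}

def depth_limited_dfs(node, goal, bound, path):
    """DFS that tests nodes at the depth bound but treats them as leaves."""
    if node == goal:
        return path
    if bound == 0:                  # at the bound: no expand
        return None
    for child in GRAPH[node]:
        result = depth_limited_dfs(child, goal, bound - 1, path + [child])
        if result is not None:
            return result
    return None

def iterative_deepening(start, goal, max_depth=10):
    """Repeat depth-limited DFS with an increasing depth bound."""
    for bound in range(max_depth + 1):
        result = depth_limited_dfs(start, goal, bound, [start])
        if result is not None:
            return result, bound
    return None, None

path, bound = iterative_deepening("S", "G")
print(path, bound)   # finds S, B, G once the depth bound reaches 2
```

As in the trace, the depth-1 pass fails and the depth-2 pass re-generates S, A, and B before reaching G, which is exactly the redundant effort the complexity analysis above accounts for.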
Example for Bidirectional Search
In the search tree below, the bidirectional search algorithm is applied. The algorithm divides the graph/tree into two sub-graphs: it starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction. The algorithm terminates at node 9, where the two searches meet.
[Figure: tree rooted at node 1 with goal node 16; the forward and backward searches meet at node 9]

Performance of Search Algorithms on Trees
b: branching factor (assumed finite); d: goal depth; m: graph depth

Algorithm | Complete | Optimal | Time | Space
Breadth-first search | Y | Y, if ¹ | O(b^d) | O(b^d)
Uniform-cost search ² | Y | Y | O(b^(C*/ε)) | O(b^(C*/ε))
Depth-first search | N | N | O(b^m) | O(bm)
Iterative deepening | Y | Y, if ¹ | O(b^d) | O(bd)
Bidirectional search ³ | Y | Y, if ¹ | O(b^(d/2)) | O(b^(d/2))

1. edge cost constant, or positive and non-decreasing in depth
2. edge costs ≥ ε, ε > 0; C* is the best goal path cost
3. both directions BFS; not always feasible

If State Space is Not a Tree
l The problem: repeated states
[Figure: a state graph in which the same states are reachable along several different paths]
l Ignoring repeated states: wasteful (BFS) or impossible (DFS). Why?
l How to prevent these problems?
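To see why ignoring repeated states is wasteful for BFS, the sketch below counts generated nodes on a small "diamond ladder" graph where every state is reachable along many paths. The graph and helper names are invented for this demonstration; they do not come from the lecture:

```python
from collections import deque

# Each level i has two states, and both connect to both states of the
# next level, so the number of tree-search paths doubles per level.
LEVELS = 10
GRAPH = {}
for i in range(LEVELS):
    for s in (f"a{i}", f"b{i}"):
        GRAPH[s] = [f"a{i+1}", f"b{i+1}"] if i + 1 < LEVELS else []

def bfs_count(start, remember):
    """Count nodes generated by BFS, with or without a reached-states set."""
    seen = {start}
    frontier = deque([start])
    generated = 1
    while frontier:
        state = frontier.popleft()
        for child in GRAPH[state]:
            if remember and child in seen:
                continue             # repeated state: throw the child away
            seen.add(child)
            frontier.append(child)
            generated += 1
    return generated

tree_like = bfs_count("a0", remember=False)   # 1 + 2 + 4 + ... + 2^9 = 1023
graph_like = bfs_count("a0", remember=True)   # 19 distinct states
print(tree_like, graph_like)
```

With only 19 distinct states, tree-style BFS generates over a thousand nodes; a DFS that ignores repeated states can do even worse, looping forever if the graph contains a cycle.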
If State Space is Not a Tree
l We also have to remember already-expanded states, kept in an Explored (aka Closed) set
l When we pick a node from the Frontier:
– remove it from the Frontier
– add it to Explored
– expand the node, generating all its successors
– for each successor, child:
l if child is in Explored or in the Frontier, throw child away // for BFS and DFS
l otherwise, add it to the Frontier
l Called the Graph-Search algorithm in Figure 3.7 and Uniform-Cost-Search in Figure 3.14

function Uniform-Cost-Search(problem)
    frontier = a priority queue ordered by path cost g, containing the start node
    explored = an empty set
    loop do
        if Empty?(frontier) then return failure
        node = Pop(frontier)
        if Goal?(node) then return Solution(node)
        Insert node in explored
        foreach child of node do
            if child not in frontier or explored then
                Insert child in frontier
            else if child in frontier with higher cost then
                Remove that old node from frontier
                Insert child in frontier

This is the algorithm in Figure 3.14 in the textbook; note that if child is not in frontier but is in explored, this algorithm will throw child away.

If State Space is Not a Tree
l BFS:
– still O(b^d) space complexity, not worse
l DFS:
– known as Memorizing DFS (MEMDFS)
– space and time are now O(min(N, b^M)); much worse!
l N: number of states in the problem
l M: length of the longest cycle-free path from the start to anywhere
– alternative: Path-Check DFS (PCDFS): remember only the expanded states on the current path (from the start to the current node)
l space: O(M)
l time: O(b^M)

Example
[Figure: weighted graph with edges S→A=1, S→B=5, S→C=8; A→D=3, A→E=7, A→G=9; B→G=4; C→G=5; G is the goal]
How are nodes expanded by:
l Depth-First Search
l Breadth-First Search
l Uniform-Cost Search
l Iterative Deepening
Are the solutions the same?
Nodes Expanded by:
l Depth-First Search: S A D E G; solution found: S A G
l Breadth-First Search: S A B C D E G; solution found: S A G
l Uniform-Cost Search: S A D B C E G; solution found: S B G
l Iterative-Deepening Search: S A B C S A D E G; solution found: S A G

Uniform-Cost Search
General Search with Open and Closed
// Note: this algorithm does detect loops in the state space
Node generalSearch(Problem problem, List OPEN) {
    OPEN.add(new Node(problem.getStartState()));
    List CLOSED = new List();  // initially empty, just a List data structure
    while (true) {
        if (OPEN.isEmpty()) return new Node("failure");
        Node node = OPEN.remove();  // removes the front node
        if (problem.isGoal(node.getState())) return node;
        CLOSED.add(node);  // remember it was expanded
        // expand: only add successors not already in OPEN or CLOSED
        OPEN.add(expand node given problem operators);
    }
}
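The OPEN/CLOSED pseudocode above, specialized to uniform-cost (priority-queue) order, can be made runnable on the lecture's final example graph. This is a Python sketch under the assumption that the figure's edge costs are S→A=1, S→B=5, S→C=8, A→D=3, A→E=7, A→G=9, B→G=4, C→G=5; the function and variable names are the author's own:

```python
import heapq

# The example graph from the "Nodes Expanded by" exercise (assumed costs).
GRAPH = {
    "S": [("A", 1), ("B", 5), ("C", 8)],
    "A": [("D", 3), ("E", 7), ("G", 9)],
    "B": [("G", 4)],
    "C": [("G", 5)],
    "D": [], "E": [], "G": [],
}

def uniform_cost_search(start, goal):
    """Graph search: goal-test on pop, explored (closed) set, frontier by g."""
    frontier = [(0, [start])]        # OPEN: priority queue ordered by g
    explored = set()                 # CLOSED: already-expanded states
    while frontier:
        g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:            # goal test when removed, not generated
            return path, g
        if state in explored:
            continue                 # a cheaper copy was already expanded
        explored.add(state)
        for child, cost in GRAPH[state]:
            if child not in explored:
                heapq.heappush(frontier, (g + cost, path + [child]))
    return None, None

path, cost = uniform_cost_search("S", "G")
print(path, cost)   # S, B, G with cost 9, cheaper than S, A, G at cost 10
```

Note that the early goal S,A,G (cost 10) sits in the frontier while the cheaper S,B,G (cost 9) is popped first, which is exactly why UCS must test for the goal on removal rather than on generation.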