
Course-Lecture2 - Uninformed Search.pdf


Full Transcript


3 Plan
๏ Agents to solve problems: types of problem, problem formulation
๏ Sample search problems
๏ Problem solving by searching: Breadth-First Search, Depth-First Search, Uniform-Cost Search

Agents to Solve Problems

5 Problem-Solving Agents
๏ A Problem-Solving Agent is a Goal-Based Agent that decides what to do by finding sequences of actions that lead to desirable states.
๏ The computational process it undertakes is called search.
๏ Problem-solving agents use atomic representations: states of the world are considered as wholes. Planning agents use factored or structured representations of states.
๏ Informed vs. uninformed algorithms: whether the agent can estimate how far it is from the goal.

6 Example: Travel in Romania
๏ On vacation in Romania; currently in Arad.
๏ Formulate the goal: be in Bucharest.
๏ Formulate the problem: States: being in the various cities. Actions: driving between cities.
๏ Find a solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.

7 Problem-Solving Agents

8 Problem formulation
A search problem is defined by:
1. State space: a set of possible states the environment can be in.
2. Initial state: where the agent starts.
3. Goal test: usually a condition, sometimes the description of a state.
4. Successor function: includes action and cost; it describes how the state changes with an action, and the cost of applying action a to transition from state s to s′.
A solution is a sequence of actions (a plan) which transforms the start state into a goal state.

9 What is in a State Space?

10 Abstraction
๏ Our formulation of the problem (e.g., getting to Bucharest) is a model, an abstract mathematical description, and not the real thing.
๏ The process of removing detail from a representation is called abstraction.
Seven Bridges of Königsberg: can you go around the city taking each bridge once and only once? In 1735, Euler showed that this is impossible using graph theory and topology. Solution in graph theory: for this to be possible, each node must have an even degree.

11 Example: Route finding
Problem: find a path from Arad to Bucharest.

12 Example: Route finding
Actions

13 Example: Route finding
๏ Formulate goal: be in Bucharest.
๏ Formulate problem: States: various cities. Actions: go to an adjacent city. Goal test: whether in Bucharest. Path cost: distance.
๏ Find solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest (a small code sketch of this formulation appears at the end of this section).

Formulate tasks as searching
Map? Where is the state space?

State Space Graphs and Search Trees

16 State space graph
๏ A state space graph is a mathematical representation of a search problem: nodes are (abstracted) world configurations; arcs represent successors (action results); the goal test is a set of goal nodes (maybe only one).
๏ In a state space graph, each state occurs only once!
๏ We can rarely build this full graph in memory (it's too big), but it's a useful idea.

17 State space graph
[Figures: a tiny state space graph for a tiny search problem; the state space graph for Pacman eating all the dots]

18 Search tree
๏ A search tree is a "what if" tree of plans and their outcomes: the start state is the root node; children correspond to successors; nodes show states, but correspond to PLANS that achieve those states.
๏ For most problems, we can never actually build the whole tree.

19 State Space Graphs vs. Search Trees
Each NODE in the search tree is an entire PATH in the state space graph. We construct both on demand, and we construct as little as possible.
[Figure: a state space graph over states S, a-h, p, q, r, G and the corresponding search tree rooted at S]
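
To make the formulation above concrete, here is a minimal Python sketch of the route-finding problem (my own illustration, not from the slides): the state space graph is stored as a plain adjacency mapping, states are city names, the successor function returns (action, next state, step cost) triples, and the goal test checks for Bucharest. The names ROAD_MAP, successors and is_goal are hypothetical, the map is only a fragment, and the distances are illustrative.

# State space graph: each city maps to {neighbouring city: road distance}.
# Only a fragment of the map; the distances are illustrative.
ROAD_MAP = {
    "Arad":           {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind":         {"Arad": 75},
    "Timisoara":      {"Arad": 118},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
}

INITIAL_STATE = "Arad"

def is_goal(state):
    # Goal test: a simple condition on the state.
    return state == "Bucharest"

def successors(state):
    # Successor function: (action, resulting state, step cost) for every road out of `state`.
    return [("go to " + city, city, cost) for city, cost in ROAD_MAP[state].items()]

A solution is then a sequence of these actions, e.g. the drive Arad, Sibiu, Fagaras, Bucharest from the slide.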
20 Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph: how big is its search tree (from S)?
[Figure: a 4-state graph over states S, a, b, G]

21 Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph: how big is its search tree (from S)?

22 Quiz: State Space Graphs vs. Search Trees
[Figure: the search tree rooted at S, which keeps growing with repeated sub-trees]
Important: lots of repeated structure in the search tree!

Sample Search Problems

24 Example: Vacuum World
๏ Formulate the problem: States: ? Actions: ? Goal test: ? Path cost: ?

25 Example: Vacuum World
๏ Formulate the problem: States: dirt and robot location. Actions: left (L), right (R), suck (S). Goal test: no dirt at any location. Path cost: 1 per action.

26 Example: The 8-puzzle
๏ Formulate the problem: States: locations of the tiles. Actions: move the blank left, right, up or down. Goal test: the given goal state. Path cost: 1 per move.

27 Example: The 8-puzzle
Number of reachable states: 9!/2 = 181,440.

28 Example: Robotic Assembly
๏ Formulate the problem: States: real-valued coordinates of the robot joint angles and of the parts of the object to be assembled. Actions: continuous motions of the robot joints. Goal test: complete assembly. Path cost: time to execute.

29 Example: Robotic Assembly

30 Example: The 8-Queens
Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.
๏ Formulate the problem: States: ? Actions: ? Goal test: ? Path cost: ?

31 Example: The 8-Queens
๏ Formulation 1: States: any arrangement of 0 to 8 queens on the board. Actions: add a queen to any square. Goal test: 8 queens on the board, none attacked. Path cost: 1 per move.
→ 64^8 states with 8 queens.

32 Example: The 8-Queens
๏ Formulation 2: States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked. Actions: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen. Goal test: 8 queens on the board, none attacked. Path cost: 1 per move.
→ 2,067 states (a code sketch of this formulation appears at the end of this section).

33 Exercise: River Crossing

34 Answer: River Crossing
๏ State space S: all valid configurations.
๏ Initial state: I = {(CSDF, )}. Goal state: G = {(, CSDF)}.
๏ Actions: states reachable in one step from S, e.g. Succs((CSDF, )) = {(CD, SF)}; Succs((CD, SF)) = {(CD, FS), (D, CFS), (C, DFS)}.
๏ Path cost: 1 for each transition.

35 Answer: River Crossing
A directed graph in state space.

36 Other example problems
๏ Travelling Salesperson Problem (TSP): each city must be visited exactly once; find the shortest tour.
๏ VLSI (Very Large Scale Integration) layout: given a schematic diagram comprising components (chips, resistors, capacitors, etc.) and interconnections (wires), find the optimal way to place the components on a printed circuit board, under the constraint that only a small number of wire layers are available (and wires on a given layer cannot cross!).
๏ Robot navigation: a generalization of the route-finding problem.
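
As promised above, here is a small Python sketch of the second 8-queens formulation (my own illustration, not from the slides): a state is a tuple giving the queen's row in each of the k leftmost columns, and the successor function only adds queens that attack nothing already on the board. The names attacked, queen_successors and queens_goal are hypothetical.

def attacked(col, row, state):
    # True if a queen placed at (col, row) is attacked by a queen already on the board.
    # Columns are distinct by construction, so only rows and diagonals matter.
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(state))

def queen_successors(state):
    # state: tuple of row indices, one per already-filled leftmost column (k = len(state)).
    col = len(state)
    if col == 8:
        return []
    return [state + (row,) for row in range(8) if not attacked(col, row, state)]

def queens_goal(state):
    # 8 queens on the board; none attacked, by construction of the successors.
    return len(state) == 8

Expanding from the empty state () only ever generates non-attacking arrangements, which is what makes this formulation so much smaller than formulation 1.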
Problem Solving by Searching

38 Searching as a problem-solving technique
Problem solving by searching:
๏ Searching is the process of looking for the solution of a problem through a set of possibilities (the state space).
๏ Search conditions include: the current state (where one is); the goal state (the solution reached, together with a check for whether it has been reached); and the cost of obtaining the solution.
๏ The solution is a path from the current state to the goal state.

39 Searching as a problem-solving technique
Process of searching:
๏ Searching proceeds as follows: check the current state; execute allowable actions to move to the next state; check if the new state is the solution state. If it is not, the new state becomes the current state and the process is repeated until a solution is found or the state space is exhausted.

40 Explore the search tree to find solution(s)
Assumptions in basic search:
๏ The environment is static.
๏ The environment is discretizable.
๏ The environment is observable.
๏ The actions are deterministic.

41 General tree search
๏ Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states).
๏ Main question: which fringe (or frontier) nodes to explore?

42 Example: Route finding
Problem: find a path from Arad to Bucharest.

43 Example: Route finding

44 Example: Route finding

45 Example: Route finding

46 Implementation: general tree search

47 State vs. Node
๏ A state is a representation of a physical configuration.
๏ A node is a data structure constituting part of a search tree; it includes the state, the parent node, the action, the path cost g(x), and the depth.
๏ The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

48 Fringe (Frontier)
๏ If you're performing a tree (or graph) search, then the set of all nodes at the end of all visited paths is called the fringe, frontier or border.
๏ It is implemented as a queue FRINGE: INSERT(node, FRINGE), REMOVE(FRINGE).
๏ The ordering of the nodes in FRINGE defines the search strategy.

49 Fringe (Frontier)

50 Search strategies
๏ A search strategy is defined by picking the order of node expansion.
๏ Strategies are evaluated along the following dimensions: Completeness: is the algorithm guaranteed to find a solution when there is one, and to correctly report failure when there is not? Cost optimality: does it find a solution with the lowest path cost of all solutions? Time complexity: how long does it take to find a solution? This can be measured in seconds, or more abstractly by the number of states and actions considered. Space complexity: how much memory is needed to perform the search?
๏ Time and space complexity are measured in terms of b: the maximum branching factor of the search tree; d: the depth of the least-cost solution; m: the maximum depth of the state space (may be ∞).

51 Uninformed vs. Informed Strategies
๏ Uninformed (or blind) strategies have no clue about how close a state is to the goal(s).
๏ Informed (or heuristic) strategies exploit such information to assess that one node is "more promising" than another.

52 Uninformed Strategies
Uninformed search strategies use only the information available in the problem definition:
๏ Breadth-First Search
๏ Depth-First Search
๏ Uniform-Cost Search
๏ Depth-Limited Search
๏ Iterative Deepening Search

Breadth-First Search

54 Breadth-First Search (BFS)
๏ BFS first visits all the nodes at the same depth and only then proceeds to nodes at a deeper depth (strategy: expand a shallowest node first).

55 Breadth-First Search
๏ A node can be reached from different nodes using different paths, but we need to visit each node only once.
๏ So we mark each node with one of 3 categories: unvisited, discovered and complete. Initially, all nodes are unvisited. After visiting a node for the first time, it becomes discovered. A node is complete if all of its adjacent nodes have been visited; thus, all the adjacent nodes of a complete node are either discovered or complete.
(https://www.codesdope.com/course/algorithms-bfs/)

56 Breadth-First Search
๏ Thus, after visiting a node, we first visit all its siblings and then their children.
๏ We can use a first-in, first-out (FIFO) queue to achieve this (the fringe is a FIFO queue).
๏ Starting from the source, we put all its adjacent nodes in the queue.

57 Breadth-First Search
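
A minimal BFS sketch along the lines described above (my own code, not the lecture's): the fringe is a FIFO queue of (state, path) pairs, and a discovered set ensures each state is visited only once. It assumes a successors(state) function that returns the neighbouring states, like queen_successors in the earlier sketch (the route-finding successors would first need their action and cost fields dropped).

from collections import deque

def breadth_first_search(start, is_goal, successors):
    fringe = deque([(start, [start])])   # FIFO queue: expand a shallowest node first
    discovered = {start}                 # states seen at least once
    while fringe:
        state, path = fringe.popleft()
        if is_goal(state):
            return path                  # sequence of states from start to a goal
        for nxt in successors(state):
            if nxt not in discovered:    # visit each state only once
                discovered.add(nxt)
                fringe.append((nxt, path + [nxt]))
    return None                          # state space exhausted: no solution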
58 Breadth-First Search (BFS) Properties
๏ Is it complete? Yes; d must be finite if a solution exists.
๏ Is it optimal? Only if costs are all 1.
๏ Search time? O(b^d), where d is the depth of the shallowest solution.
๏ Fringe space? It holds roughly the last tier, so O(b^d).

Depth-First Search

60 Depth-First Search (DFS)
๏ DFS explores the depth of the graph before the breadth, i.e., it traverses along increasing depth and, upon reaching the end, backtracks to the node from which it started and then does the same with the sibling node.

61 Depth-First Search
๏ As in BFS, we mark the vertices white, gray and black to represent unvisited, discovered and complete respectively.
๏ Fringe = LIFO queue, i.e., a stack that puts successors at the front.
(https://www.codesdope.com/course/algorithms-dfs/)

62 Depth-First Search

63 Depth-First Search (DFS) Properties
๏ Is it complete? m could be infinite, so only if we prevent cycles.
๏ Is it optimal? No; it finds the "leftmost" solution, regardless of depth or cost.
๏ Search time? O(b^m). Terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first.
๏ Fringe space? It only holds siblings on the path to the root, so O(bm), i.e., linear space!

64 Exercise: DFS vs. BFS
๏ When will BFS outperform DFS?
๏ When will DFS outperform BFS?

65 Depth-Limited Search
๏ Depth-limited search = depth-first search with depth limit L, i.e., nodes at depth L have no successors.

66 Iterative Deepening Search
๏ Idea: get DFS's space advantage together with BFS's time / shallow-solution advantages.
๏ Run a DFS with depth limit 1. If there is no solution, run a DFS with depth limit 2. If there is no solution, run a DFS with depth limit 3, and so on (a sketch of both ideas follows below).
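
Here is a small sketch of depth-limited search and iterative deepening as just described (my own code, not the lecture's). It assumes the same kind of successors(state) function as before; the cycle check on the current path and the max_depth cap are my additions to keep the sketch finite.

def depth_limited_search(start, is_goal, successors, limit):
    # DFS that treats nodes at depth `limit` as having no successors.
    def recurse(state, path, depth):
        if is_goal(state):
            return path
        if depth == limit:
            return None
        for nxt in successors(state):
            if nxt not in path:                      # avoid cycles along the current path
                result = recurse(nxt, path + [nxt], depth + 1)
                if result is not None:
                    return result
        return None
    return recurse(start, [start], 0)

def iterative_deepening_search(start, is_goal, successors, max_depth=50):
    # Run depth-limited DFS with an increasing limit: linear space, shallowest solution first.
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, is_goal, successors, limit)
        if result is not None:
            return result
    return None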
67 Cost-Sensitive Search
BFS finds the shortest path in terms of the number of actions; it does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.
[Figure: a weighted state space graph from START to GOAL]

Uniform-Cost Search

69 Uniform-Cost Search (Dijkstra's algorithm)
๏ Strategy: expand a cheapest node first.
๏ The fringe is a priority queue (priority: cumulative cost), so the search expands nodes in cost contours.
๏ Equivalent to breadth-first search if step costs are all equal.
[Figure: UCS expanding a weighted graph from S to G, with the cumulative cost of each expanded node shown]

70 Properties of Uniform-Cost Search
๏ Complete? Yes; if every step cost ≥ ε > 0 (ε is a lower bound on the cost of each action), it will never get caught going down a single infinite path.
๏ Time? The number of nodes with g ≤ the cost of the optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution.
๏ Space? The number of nodes with g ≤ the cost of the optimal solution, O(b^⌈C*/ε⌉).
๏ Optimal? Yes: nodes are expanded in increasing order of g(n).

71 Uniform-Cost Search Issues
๏ Remember: UCS explores increasing cost contours.
๏ The good: UCS is complete and optimal!
๏ The bad: it explores options in every "direction" and has no information about the goal location.
๏ We'll fix that soon!

72 Summary of Algorithms

73 Summary
๏ All these search algorithms are the same except for their fringe strategies.
๏ Conceptually, all fringes are priority queues (i.e. collections of nodes with attached priorities).
๏ We can even code one implementation that takes a variable queuing object (see the sketch below).
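
That last point can be made concrete with a sketch like the following (my own code, not the lecture's): one tree-search loop whose behaviour is fixed entirely by the queuing object passed in. A FIFO fringe gives BFS, a LIFO fringe gives DFS, and a priority queue keyed on cumulative path cost gives UCS. It assumes a successors(state) function that returns (next state, step cost) pairs; the class and function names are hypothetical.

import heapq
from collections import deque

class FIFOFringe:                 # queue: gives Breadth-First Search
    def __init__(self):
        self.items = deque()
    def insert(self, node):
        self.items.append(node)
    def remove(self):
        return self.items.popleft()
    def empty(self):
        return not self.items

class LIFOFringe:                 # stack: gives Depth-First Search
    def __init__(self):
        self.items = []
    def insert(self, node):
        self.items.append(node)
    def remove(self):
        return self.items.pop()
    def empty(self):
        return not self.items

class PriorityFringe:             # priority queue on cumulative cost: gives Uniform-Cost Search
    def __init__(self):
        self.items = []
        self.counter = 0          # tie-breaker so nodes themselves are never compared
    def insert(self, node):
        cost = node[2]            # node = (state, path, cumulative cost)
        heapq.heappush(self.items, (cost, self.counter, node))
        self.counter += 1
    def remove(self):
        return heapq.heappop(self.items)[2]
    def empty(self):
        return not self.items

def tree_search(start, is_goal, successors, fringe):
    # One loop for BFS, DFS and UCS: only the fringe ordering differs.
    fringe.insert((start, [start], 0))
    while not fringe.empty():
        state, path, cost = fringe.remove()
        if is_goal(state):
            return path
        for nxt, step_cost in successors(state):
            fringe.insert((nxt, path + [nxt], cost + step_cost))
    return None

For example, tree_search("Arad", lambda s: s == "Bucharest", lambda s: ROAD_MAP[s].items(), PriorityFringe()) would run uniform-cost search on the ROAD_MAP fragment sketched earlier. Like the tree search on the slides, this sketch does not track repeated states, so the DFS variant can loop forever on graphs with cycles.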
