Artificial Intelligence Search Agents

Document Details

Uploaded by UnderstandableCarnelian6214


Summary

These lecture notes provide an introduction to search agents in artificial intelligence. The document covers the concepts of goal-based agents and explores different examples, such as the 8-queens problem, and the 8-puzzle problem. The material also introduces problem formulation and search strategies.

Full Transcript


Goal-based agents
- Reflex agents use a direct mapping from states to actions.
- Goal-based agents (problem-solving agents or planning agents) work towards a goal: they consider the impact of actions on future states, and their job is to identify the action or series of actions that leads to the goal.
- This is formalized as a search through possible solutions.

Example: the 8-queens problem
- On a chess board, place 8 queens so that no queen is attacking any other horizontally, vertically or diagonally.
- Number of possible sequences to investigate: 64 × 63 × 62 × ... × 57 ≈ 1.8 × 10^14.

Problem solving as search
1. Define the problem through:
   (a) Goal formulation
   (b) Problem formulation
2. Solve the problem as a two-stage process:
   (a) Search: "mental" or "offline" exploration of several possibilities
   (b) Execute the solution found

Problem formulation
- Initial state: the state in which the agent starts.
- States: all states reachable from the initial state by any sequence of actions (the state space).
- Actions: the possible actions available to the agent. At a state s, Actions(s) returns the set of actions that can be executed in state s (the action space).
- Transition model: a description of what each action does, Results(s, a).
- Goal test: determines whether a given state is a goal state.
- Path cost: a function that assigns a numeric cost to a path with respect to the performance measure.

Example: 8-queens formulation
- States: all arrangements of 0 to 8 queens on the board.
- Initial state: no queens on the board.
- Actions: add a queen to any empty square.
- Transition model: the updated board.
- Goal test: 8 queens on the board, none attacked.

Example: the 8-puzzle (a start state and a goal state are given)
- States: the location of each of the 8 tiles in the 3x3 grid.
- Initial state: any state.
- Actions: move Left, Right, Up or Down.
- Transition model: given a state and an action, returns the resulting state.
- Goal test: does the state match the goal state?
- Path cost: the total number of moves; each move costs 1.

Search Problems
- A search problem consists of:
  - a state space;
  - a successor function (with actions and costs, e.g. "N", 1.0 or "E", 1.0);
  - a start state; and
  - a goal test.
- A solution is a sequence of actions (a plan) which transforms the start state into a goal state.
- Search problems are models.

Example: traveling in Romania
- State space: the cities.
- Successor function: the roads (go to an adjacent city, with cost = distance).
- Start state: Arad.
- Goal test: is the state == Bucharest?
- Solution?

What's in a state space?
- The world state includes every last detail of the environment; a search state keeps only the details needed for planning (abstraction).
- Problem: Pathing. States: (x, y) location; Actions: NSEW; Successor: update location only; Goal test: is (x, y) = END?
- Problem: Eat-All-Dots. States: {(x, y), dot booleans}; Actions: NSEW; Successor: update location and possibly a dot boolean; Goal test: are all dots false?

State Space Graphs and Search Trees

State space graphs
- A state space graph is a mathematical representation of a search problem: nodes are (abstracted) world configurations, and arcs represent successors (action results).
- The goal test is a set of goal nodes (maybe only one).
- In a state space graph, each state occurs only once.
- We can rarely build this full graph in memory (it's too big), but it's a useful idea. (The slide shows a tiny search graph for a tiny search problem, with states a, b, c, d, e, f, h, p, q, r, start S and goal G.)

Search trees
- A search tree is a "what if" tree of plans and their outcomes.
- The start state is the root node ("this is now"); children correspond to successors ("possible futures").
- Nodes show states, but correspond to PLANS that achieve those states.
- For most problems, we can never actually build the whole tree.

State space graphs vs. search trees
- Each NODE in the search tree is an entire PATH in the state space graph.
- We construct both on demand, and we construct as little as possible.
(The slide expands the tiny graph above into the corresponding search tree rooted at S.)

Searching with a search tree
- Search: expand out potential plans (tree nodes), maintain a fringe of partial plans under consideration, and try to expand as few tree nodes as possible.
(Slides show a search example on the Romania map, and a fragment of the 8-puzzle problem space, © Daniel S. Weld.)

General tree search: the important ideas are the fringe, expansion, and the exploration strategy.

Review: Search Problems
- agent: an entity that perceives its environment and acts upon that environment.
- state: a configuration of the agent and its environment (the slides illustrate states with 15-puzzle board configurations).
- initial state: the state in which the agent begins.
- actions: choices that can be made in a state; ACTIONS(s) returns the set of actions that can be executed in state s.
- transition model: a description of what state results from performing any applicable action in any state; RESULT(s, a) returns the state resulting from performing action a in state s (illustrated by sliding a tile in the 15-puzzle).
- state space: the set of all states reachable from the initial state by any sequence of actions.
- goal test: a way to determine whether a given state is a goal state.
- path cost: a numerical cost associated with a given path (illustrated on a graph of nodes A through M, first with varying edge costs, then with all step costs equal to 1).
- solution: a sequence of actions that leads from the initial state to a goal state.
- optimal solution: a solution that has the lowest path cost among all solutions.
- node: a data structure that keeps track of:
  - a state;
  - a parent (the node that generated this node);
  - an action (the action applied to the parent to get the node); and
  - a path cost (from the initial state to the node).

Approach
Start with a frontier that contains the initial state.
Repeat:
- If the frontier is empty, then there is no solution.
- Remove a node from the frontier.
- If the node contains a goal state, return the solution.
- Expand the node, and add the resulting nodes to the frontier.

(The slides trace this procedure on a small graph, finding a path from A to E: the frontier starts as {A}, then holds B, then C and D, and finally E, at which point the goal is found.)

What could go wrong? If two states can reach each other (as A and B do in the second example), the procedure can bounce between them forever, re-adding states it has already seen.

Revised Approach
Start with a frontier that contains the initial state.
Start with an empty explored set.
Repeat:
- If the frontier is empty, then there is no solution.
- Remove a node from the frontier.
- If the node contains a goal state, return the solution.
- Add the node to the explored set.
- Expand the node, and add the resulting nodes to the frontier if they aren't already in the frontier or the explored set.

Search in More Detail
Faculty of Computer and Information Sciences, Mansoura University, Information System Department

Outline
- Problem-solving agents: well-defined problems and solutions; problem types
- Example problems: toy problems; real-world problems
- Searching for solutions: tree search algorithms
- Uninformed search strategies
- Avoiding repeated states

Problem-solving Agents
- A problem-solving agent is a kind of goal-based agent: it decides what to do by finding a sequence of actions that leads to desirable states.
- Goal formulation is the first step in problem solving. It is based on the current situation and the agent's performance measure.
- Problem formulation is the process of deciding what actions and states to consider, given a goal.
- An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence. This process of looking for such a sequence is called search.
- A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out; this is called the execution phase.
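The frontier-plus-explored-set loop described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the lecture; the `graph_search` name and the tiny example graph are hypothetical.

```python
# Sketch of the revised approach: a frontier of candidate plans plus an
# explored set of states already expanded. The frontier here is a LIFO
# stack (depth-first); using a FIFO queue instead gives breadth-first search.

def graph_search(start, goal_test, successors):
    """Return a list of states from start to a goal, or None if no solution."""
    frontier = [[start]]          # each frontier entry is a path (a plan)
    explored = set()              # states that have already been expanded
    while frontier:               # frontier empty -> no solution
        path = frontier.pop()     # remove a node from the frontier
        state = path[-1]
        if goal_test(state):      # node contains a goal state: solution found
            return path
        explored.add(state)       # add the node to the explored set
        for nxt in successors(state):
            # add resulting nodes only if not already explored or queued
            if nxt not in explored and all(p[-1] != nxt for p in frontier):
                frontier.append(path + [nxt])
    return None

# Two states that reach each other, as in the slides' "what could go wrong?"
# example: the explored set is what stops the search from looping forever.
graph = {"A": ["B"], "B": ["A", "C"], "C": []}
print(graph_search("A", lambda s: s == "C", lambda s: graph[s]))  # ['A', 'B', 'C']
```

Without the `explored` check, the same search on this graph would bounce between A and B indefinitely.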
- This is a simple "formulate, search, execute" design for the agent. (A slide shows this restricted form of the general agent.)

Well-defined Problems and Solutions
A problem can be defined formally by four components:
- The initial state that the agent starts in.
- A description of the possible actions available to the agent, given as a successor function: for a particular state x, SUCCESSOR-FN(x) returns a set of (action, successor) ordered pairs, where each action is one of the legal actions in state x and each successor is a state that can be reached from x by applying that action.
- The goal test, which determines whether a given state is a goal state.
- A path cost function that assigns a numeric cost to each path.

Notes
- A solution to a problem is a path from the initial state to a goal state.
- Solution quality is measured by the path cost function; the optimal solution has the lowest path cost among all solutions.
- The state space is the set of all states reachable from the initial state.
- A path in the state space is a sequence of states connected by a sequence of actions.
- The step cost of taking action a to go from state x to state y is denoted by c(x, a, y).

Example: Romania
- On holiday in Romania, currently in Arad; the flight leaves tomorrow from Bucharest.
- Formulate the goal: be in Bucharest.
- Formulate the problem: states are the various cities; actions are driving between cities.
- Find a solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.
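The Romania formulation above can be written down directly as data. This is a sketch, not code from the slides: only the roads on the Arad to Bucharest route are listed, and the distances are the standard AIMA map values, included here for illustration.

```python
# Sketch of the Romania problem: states are cities, actions drive along roads,
# and the step cost c(x, a, y) is the road distance.

roads = {
    ("Arad", "Sibiu"): 140,
    ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}
roads.update({(b, a): d for (a, b), d in list(roads.items())})  # roads are two-way

def path_cost(path):
    """Sum the step costs along a path of cities."""
    return sum(roads[(x, y)] for x, y in zip(path, path[1:]))

solution = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
assert solution[-1] == "Bucharest"   # goal test: is the state Bucharest?
print(path_cost(solution))           # 140 + 99 + 211 = 450
```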
(A slide diagrams the well-defined problem components, and the outline is repeated.)

Example: Vacuum World
- This vacuum world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in that square. It can choose to move left, move right, suck up the dirt, or do nothing.
- One very simple agent function: if the current square is dirty, then suck; otherwise move to the other square.
- States: the agent is in one of two locations, and each location may or may not contain dirt, so there are 2 × 2^2 = 8 possible world states.
- Initial state: any state can be designated as the initial state.
- Successor function: generates the legal states that result from trying the three actions (Left, Right, Suck).
- Goal test: checks whether all the squares are clean.
- Path cost: each step costs 1, so the path cost is the number of steps in the path.

Example: 8 Queens
- The goal of the 8-queens problem is to place 8 queens on a chessboard such that no queen attacks any other (a queen attacks any piece in the same row, column or diagonal).
- An incremental formulation involves operators that augment the state description, starting with an empty state. For the 8-queens problem, this means that each action adds a queen to the state.
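The vacuum world's 2 × 2^2 = 8 state count above can be checked by enumerating the states. A minimal sketch; the state encoding (location, dirt-in-A, dirt-in-B) is an assumption of this example, not notation from the slides.

```python
# Enumerate the vacuum world's state space: the agent is in one of 2 squares,
# and each of squares A and B is independently dirty or clean: 2 * 2**2 = 8.
from itertools import product

states = [(loc, dirt_a, dirt_b)
          for loc in ("A", "B")
          for dirt_a, dirt_b in product([True, False], repeat=2)]
assert len(states) == 2 * 2**2 == 8

def successor(state):
    """Legal results of trying the three actions (Left, Right, Suck)."""
    loc, da, db = state
    return {
        "Left":  ("A", da, db),
        "Right": ("B", da, db),
        "Suck":  (loc, False, db) if loc == "A" else (loc, da, False),
    }

def goal_test(state):
    return not state[1] and not state[2]   # all squares clean

print(goal_test(successor(("A", True, False))["Suck"]))  # True
```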
- A complete-state formulation starts with all 8 queens on the board and moves them around.
- Incremental formulation:
  - States: any arrangement of 0 to 8 queens on the board.
  - Initial state: no queens on the board.
  - Successor function: add a queen to any empty square that is not attacked.
  - Goal test: 8 queens are on the board, none attacked.

Example: 8 puzzle
- A 3 × 3 board with numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space; the objective is to reach a specified goal state.
- States: a state description specifies the location of each of the 8 tiles and the blank in one of the 9 squares.
- Initial state: any state can be designated as the initial state.
- Successor function: the simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down; different subsets of these are possible depending on where the blank is.
- Goal test: checks whether the state matches the goal configuration.
- Path cost: each step costs 1, so the path cost is the number of steps in the path.

Example: Route-finding
- A route-finding problem is defined in terms of specified locations and transitions along links between them. Route-finding algorithms are used in a variety of applications, such as routing in computer networks and airline travel planning systems.
- Consider the airline travel problem that must be solved by a travel-planning Web site:
  - States: each state includes a location (e.g., an airport) and the current time.
  - Initial state: specified by the user's query.
  - Actions: take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed.
  - Goal test: are we at the final destination specified by the user?
  - Path cost: depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.

Searching for solutions
- Having formulated some problems, we now need to solve them. This is done by a search through the state space.
- This chapter deals with search techniques that use an explicit search tree, generated by the initial state and the successor function that together define the state space.
- Figure 3.6 shows some of the expansions in the search tree for finding a route from Arad to Bucharest. The root of the search tree is a search node corresponding to the initial state, In(Arad).
- The first step is to test whether this is a goal state. Because it is not, we need to consider some other states. This is done by expanding the current state, that is, applying the successor function to the current state, thereby generating a new set of states. In this case, we get three new states: In(Sibiu), In(Timisoara), and In(Zerind).
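Expanding In(Arad) by applying the successor function, as described above, can be sketched as follows. This is an illustrative sketch: only Arad's neighbours are listed, and the `expand` and `state_name` helpers are hypothetical names.

```python
# Expanding a state: apply the successor function to generate the new states.
# From In(Arad) this yields In(Sibiu), In(Timisoara), and In(Zerind).

neighbours = {"Arad": ["Sibiu", "Timisoara", "Zerind"]}

def state_name(state):
    """Strip the In(...) wrapper: 'In(Arad)' -> 'Arad'."""
    return state[len("In("):-1]

def expand(state):
    """Return the states reachable from `state` in one driving action."""
    return [f"In({city})" for city in neighbours.get(state_name(state), [])]

print(expand("In(Arad)"))  # ['In(Sibiu)', 'In(Timisoara)', 'In(Zerind)']
```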
- Now we must choose which of these three possibilities to consider further. The choice of which state to expand is determined by the search strategy.
- It is important to distinguish between the state space and the search tree. For the route-finding problem, there are only 20 states in the state space, one for each city, but there are an infinite number of paths in this state space, so the search tree has an infinite number of nodes.
- There are many ways to represent nodes, but we will assume that a node is a data structure with five components:
  - STATE: the state in the state space to which the node corresponds;
  - PARENT-NODE: the node in the search tree that generated this node;
  - ACTION: the action that was applied to the parent to generate the node;
  - PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers; and
  - DEPTH: the number of steps along the path from the initial state.
- A node is a bookkeeping data structure used to represent the search tree; a state corresponds to a configuration of the world.
- We also need to represent the collection of nodes that have been generated but not yet expanded. This collection is called the fringe; each element of the fringe is a leaf node, that is, a node with no successors in the tree.
- The search strategy is then a function that selects the next node to be expanded from this set.
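The five node components described above map directly onto a small data structure. A sketch, assuming Python; the `Node` and `child_node` names are illustrative, and the step cost used in the example is the Arad to Sibiu road distance from the standard AIMA map.

```python
# A search-tree node with the five components: STATE, PARENT-NODE, ACTION,
# PATH-COST g(n), and DEPTH.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: str                        # STATE: the state this node corresponds to
    parent: Optional["Node"] = None   # PARENT-NODE: the node that generated this one
    action: Optional[str] = None      # ACTION: action applied to the parent
    path_cost: float = 0.0            # PATH-COST: g(n), cost from the initial state
    depth: int = 0                    # DEPTH: steps from the initial state

def child_node(parent, action, state, step_cost):
    """Build the child node that `action` generates from `parent`."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)

root = Node("In(Arad)")
child = child_node(root, "Go(Sibiu)", "In(Sibiu)", 140)
print(child.path_cost, child.depth)  # 140.0 1
```

Following the `parent` pointers from any node back to the root recovers the path, which is exactly how a solution is read off once the goal test succeeds.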
- There are two types of search algorithms: uninformed search algorithms (blind search) and informed search algorithms (heuristic search).

Measuring problem-solving performance
We will evaluate an algorithm's performance in four ways:
- Completeness: is the algorithm guaranteed to find a solution when there is one?
- Optimality: does the strategy find the optimal solution?
- Time complexity: how long does it take to find a solution?
- Space complexity: how much memory is needed to perform the search?

Complexity is expressed in terms of three quantities:
- b, the branching factor, or maximum number of successors of any node;
- d, the depth of the shallowest goal node; and
- m, the maximum length of any path in the state space.

Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory.
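To see why these quantities matter, count the nodes a search can generate: a tree with branching factor b has up to 1 + b + b^2 + ... + b^d nodes at depths up to the shallowest goal depth d. A small sketch; the function name is illustrative.

```python
# Nodes at depths 0..d in a tree with branching factor b:
# 1 + b + b^2 + ... + b^d.

def nodes_to_depth(b, d):
    return sum(b**i for i in range(d + 1))

# Even a modest branching factor blows up quickly, which is why time
# (nodes generated) and space (nodes stored) complexity are measured.
print(nodes_to_depth(10, 5))  # 111111
```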
