Solving Problems by Searching (Part 1)

Summary

This document provides an introduction to solving problems through search in Artificial Intelligence. The content explores agent structures, problem-solving agents, various search algorithms, and different agent types. It focuses on the concept of agents and their structure in AI.

Full Transcript


Solving Problems by Searching Part – 1 Dr. Oybek Eraliev, PhD, Department of Computer Engineering, Inha University in Tashkent. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 1 Content Ø The Structure of Agents Ø Problem-Solving Agents Ø Example Problems Ø Search Algorithms Ø Uninformed Search Strategies Ø Informed (Heuristic) Search Strategies Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 2 The Structure of Agents Ø The job of AI is to design an agent program that implements the agent function—the mapping from percepts to actions. Ø We assume this program will run on some sort of computing device with physical sensors and actuators—we call this the agent architecture: agent = architecture + program Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 3 The Structure of Agents Ø The architecture might be just an ordinary PC, or it might be a robotic car with several onboard computers, cameras, and other sensors. Ø In general, the architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 4 The Structure of Agents Agent programs Ø The agent programs take the current percept as input from the sensors and return an action to the actuators. Ø The difference between the agent program and the agent function: Ø The agent program takes the current percept as input; Ø The agent function may depend on the entire percept history. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 5 The Structure of Agents Agent programs We describe the agent programs in a simple pseudocode language. The pseudocode shows a rather trivial agent program that keeps track of the percept sequence and then uses it to index into a table of actions to decide what to do. The table represents the agent function that the agent program embodies. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 6 The Structure of Agents Agent programs Ø The key challenge for AI is to find out how to write programs that, to the extent possible, produce rational behavior from a smallish program rather than from a vast table. Ø There are four basic kinds of agent programs that embody the principles underlying almost all intelligent systems: simple reflex agents; model-based reflex agents; goal-based agents; utility-based agents. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 7 The Structure of Agents Simple reflex agents Ø The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history. Ø For example, the vacuum agent is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt. An agent program for this agent is shown in Figure. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 8
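The agent program itself is not reproduced in this transcript, so here is a minimal Python sketch of the two-cell reflex vacuum agent just described; the (location, status) percept format and the square names A and B follow the slides, while everything else is an assumption of the sketch.

```python
# A minimal sketch of the two-cell reflex vacuum agent described above.
# The percept is assumed to be a (location, status) pair, e.g. ("A", "Dirty").

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```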
The Structure of Agents Simple reflex agents Ø Simple reflex behaviors occur even in more complex environments. Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. Ø We call such a connection a condition–action rule, written as if car-in-front-is-braking then initiate-braking. Ø Humans also have many such connections, some of which are learned responses (as for driving) and some of which are innate reflexes (such as blinking when something approaches the eye). Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 9 The Structure of Agents Simple reflex agents Ø A more general and flexible approach is first to build a general-purpose interpreter for condition–action rules and then to create rule sets for specific task environments. Ø Figure gives the structure of this general program in schematic form, showing how the condition–action rules allow the agent to make the connection from percept to action. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 10 The Structure of Agents Simple reflex agents Ø An agent program is shown in Figure. Ø The INTERPRET-INPUT function generates an abstracted description of the current state from the percept, and the RULE-MATCH function returns the first rule in the set of rules that matches the given state description. Ø Note that the description in terms of "rules" and "matching" is purely conceptual; as noted above, actual implementations can be as simple as a collection of logic gates implementing a Boolean circuit. Ø Alternatively, a "neural" circuit can be used, where the logic gates are replaced by the nonlinear units of artificial neural networks. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 11 The Structure of Agents Simple reflex agents Ø Simple reflex agents have the admirable property of being simple, but they are of limited intelligence. Ø The agent will work only if the correct decision can be made on the basis of just the current percept—that is, only if the environment is fully observable. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 12 The Structure of Agents Simple reflex agents Ø Even a little bit of unobservability can cause serious trouble. Ø For example, suppose that a simple reflex vacuum agent is deprived of its location sensor and has only a dirt sensor. Ø Such an agent has just two possible percepts: [Dirty] and [Clean]. Ø It can Suck in response to [Dirty]; what should it do in response to [Clean]? Ø Moving Left fails (forever) if it happens to start in square A, and moving Right fails (forever) if it happens to start in square B. Ø Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 13 The Structure of Agents Simple reflex agents Ø Escape from infinite loops is possible if the agent can randomize its actions. Ø For example, if the vacuum agent perceives [Clean], it might flip a coin to choose between Right and Left. Ø It is easy to show that the agent will reach the other square in an average of two steps. Then, if that square is dirty, the agent will clean it and the task will be complete. Ø Hence, a randomized simple reflex agent might outperform a deterministic simple reflex agent. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 14
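A short sketch of the rule-interpreter structure described above (INTERPRET-INPUT plus RULE-MATCH), with the coin-flip escape for the location-blind agent folded in. Representing a rule as a (condition, action) pair of Python callables is an assumption of this sketch, not the slides' notation.

```python
import random

# Sketch of a condition–action rule interpreter: INTERPRET-INPUT abstracts
# the percept, RULE-MATCH returns the action of the first rule whose
# condition holds. Assumes some rule always matches the given percepts.

def interpret_input(percept):
    return percept  # here the abstracted state description is the percept itself

def rule_match(state, rules):
    for condition, action in rules:
        if condition(state):
            return action

rules = [
    (lambda s: s == "Dirty", lambda: "Suck"),
    # Coin-flip escape for the location-blind agent on a [Clean] percept:
    (lambda s: s == "Clean", lambda: random.choice(["Left", "Right"])),
]

def simple_reflex_agent(percept):
    return rule_match(interpret_input(percept), rules)()

print(simple_reflex_agent("Dirty"))  # -> Suck
print(simple_reflex_agent("Clean"))  # -> Left or Right, at random
```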
The Structure of Agents Model-based reflex agents Ø The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. Ø Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program in some form. Ø First, we need some information about how the world changes over time, which can be divided roughly into two parts: the effects of the agent's actions and how the world evolves independently of the agent. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 15 The Structure of Agents Model-based reflex agents Ø This knowledge about "how the world works"—whether implemented in simple Boolean circuits or in complete scientific theories—is called a transition model of the world. Ø Second, we need some information about how the state of the world is reflected in the agent's percepts. This kind of knowledge is called a sensor model. Ø Together, the transition model and sensor model allow an agent to keep track of the state of the world—to the extent possible given the limitations of the agent's sensors. An agent that uses such models is called a model-based agent. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 16 The Structure of Agents Model-based reflex agents Figure gives the structure of the model-based reflex agent with internal state, showing how the current percept is combined with the old internal state to generate the updated description of the current state, based on the agent's model of how the world works. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 17 The Structure of Agents Model-based reflex agents The agent program is shown in Figure. The interesting part is the function UPDATE-STATE, which is responsible for creating the new internal state description. The details of how models and states are represented vary widely depending on the type of environment and the particular technology used in the agent design. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 18
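Since the figure with the agent program is not in the transcript, here is a skeleton in the same spirit: the transition and sensor knowledge are hidden inside an update_state function supplied by the designer, which is an assumption of this sketch.

```python
# Skeleton of a model-based reflex agent as described above. The concrete
# form of the internal state and of update_state (which embodies the
# transition model and sensor model) depends on the environment.

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = None        # internal description of the current world state
        self.last_action = None  # most recent action, needed by the transition model
        self.update_state = update_state
        self.rules = rules       # condition–action rules, as in the earlier sketch

    def __call__(self, percept):
        # UPDATE-STATE folds the old state, the last action, and the new
        # percept into an updated state description.
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
```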
The Structure of Agents Goal-based agents Ø Knowing something about the current state of the environment is not always enough to decide what to do. Ø For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. Ø In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable—for example, being at a particular destination. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 19 The Structure of Agents Goal-based agents The agent program can combine this with the model (the same information as was used in the model-based reflex agent) to choose actions that achieve the goal. Figure shows the goal-based agent's structure. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 20 The Structure of Agents Utility-based agents Ø Goals alone are not enough to generate high-quality behavior in most environments. Ø For example, many action sequences will get the taxi to its destination, but some are quicker, safer, more reliable, or cheaper than others. Ø Goals just provide a crude binary distinction between "happy" and "unhappy" states. Ø A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. Because "happy" does not sound very scientific, economists and computer scientists use the term utility instead. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 21 The Structure of Agents Utility-based agents The utility-based agent structure appears in Figure. A second figure shows a general learning agent: the "performance element" box represents what we have previously considered to be the whole agent program, and the "learning element" box gets to modify that program to improve its performance. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 22 The Structure of Agents Learning agents In many areas of AI, this is now the preferred method for creating state-of-the-art systems. Any type of agent (model-based, goal-based, utility-based, etc.) can be built as a learning agent (or not). A learning agent can be divided into four conceptual components, as shown in Figure. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 23 The Structure of Agents Learning agents Ø The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. Ø The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. Ø The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 24 The Structure of Agents Learning agents Ø The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences. Ø If the performance element had its way, it would keep doing the actions that are best, given what it knows, but if the agent is willing to explore a little and do some perhaps suboptimal actions in the short run, it might discover much better actions for the long run. Ø The problem generator's job is to suggest these exploratory actions. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 25 The Structure of Agents Summary Ø Simple reflex agents respond directly to percepts, whereas model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept. Ø Goal-based agents act to achieve their goals, and utility-based agents try to maximize their own expected "happiness." Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 26 Content Ø The Structure of Agents Ø Problem-Solving Agents Ø Example Problems Ø Search Algorithms Ø Uninformed Search Strategies Ø Informed (Heuristic) Search Strategies Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 27 Problem-Solving Agents Ø When the correct action to take is not immediately obvious, an agent may need to plan ahead: to consider a sequence of actions that form a path to a goal state. Ø Such an agent is called a problem-solving agent, and the computational process it undertakes is called search. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 28 Problem-Solving Agents A simplified road map of part of Romania, with road distances in miles. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 29 Problem-Solving Agents With that information, the agent can follow this four-phase problem-solving process: Ø Goal formulation: The agent adopts the goal of reaching Bucharest. Goals organize behavior by limiting the objectives and hence the actions to be considered. Ø Problem formulation: The agent devises a description of the states and actions necessary to reach the goal—an abstract model of the relevant part of the world. For our agent, one good model is to consider the actions of traveling from one city to an adjacent city, and therefore the only fact about the state of the world that will change due to an action is the current city. Dr.
Oybek Eraliyev Class: Artificial Intelligence SOC4040 30 Problem-Solving Agents With that information, the agent can follow this four-phase problem-solving process: Ø Search: Before taking any action in the real world, the agent simulates sequences of actions in its model, searching until it finds a sequence of actions that reaches the goal. Such a sequence is called a solution. The agent might have to simulate multiple sequences that do not reach the goal, but eventually it will find a solution (such as going from Arad to Sibiu to Fagaras to Bucharest), or it will find that no solution is possible. Ø Execution: The agent can now execute the actions in the solution, one at a time. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 31 Problem-Solving Agents Ø It is an important property that in a fully observable, deterministic, known environment, the solution to any problem is a fixed sequence of actions: drive to Sibiu, then Fagaras, then Bucharest. Ø If the model is correct, then once the agent has found a solution, it can ignore its percepts while it is executing the actions—closing its eyes, so to speak—because the solution is guaranteed to lead to the goal. Ø Control theorists call this an open-loop system: ignoring the percepts breaks the loop between agent and environment. Ø If there is a chance that the model is incorrect, or the environment is nondeterministic, then the agent would be safer using a closed-loop approach that monitors the percepts. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 32 Problem-Solving Agents Search problems and solutions A search problem can be defined formally as follows: Ø A set of possible states that the environment can be in. We call this the state space. Ø The initial state that the agent starts in. For example: Arad. Ø A set of one or more goal states. Sometimes there is one goal state (e.g., Bucharest), sometimes there is a small set of alternative goal states, and sometimes the goal is defined by a property that applies to many states (potentially an infinite number). For example, in a vacuum-cleaner world, the goal might be to have no dirt in any location, regardless of any other facts about the state. We can account for all three of these possibilities by specifying an IS-GOAL method for a problem. In this chapter we will sometimes say "the goal" for simplicity, but what we say also applies to "any one of the possible goal states." Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 33 Problem-Solving Agents Search problems and solutions A search problem can be defined formally as follows: Ø The actions available to the agent. Given a state s, ACTIONS(s) returns a finite set of actions that can be executed in s. We say that each of these actions is applicable in s. Ø A transition model, which describes what each action does. RESULT(s, a) returns the state that results from doing action a in state s. For example, RESULT(Arad, ToZerind) = Zerind. Ø An action cost function, denoted by ACTION-COST(s, a, s′) when we are programming or c(s, a, s′) when we are doing math, that gives the numeric cost of applying action a in state s to reach state s′. A problem-solving agent should use a cost function that reflects its own performance measure; for example, for route-finding agents, the cost of an action might be the length in miles, or it might be the time it takes to complete the action. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 34 Problem-Solving Agents Search problems and solutions Ø A sequence of actions forms a path, and a solution is a path from the initial state to a goal state. We assume that action costs are additive; that is, the total cost of a path is the sum of the individual action costs. Ø An optimal solution has the lowest path cost among all solutions. In this chapter, we assume that all action costs will be positive, to avoid certain complications. Ø The state space can be represented as a graph in which the vertices are states and the directed edges between them are actions. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 35
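The formal definition above maps naturally onto code. A minimal sketch, using a hypothetical RouteProblem class and a fragment of the Romania map (distances as in the map figure):

```python
# Sketch of the formal problem definition above, instantiated for the
# Romania route-finding problem. Only a fragment of the map is included.

ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, s):             # ACTIONS(s): applicable actions in s
        return list(self.roads.get(s, {}))

    def result(self, s, a):           # RESULT(s, a): here an action is "go to city a"
        return a

    def action_cost(self, s, a, s2):  # c(s, a, s'): road distance
        return self.roads[s][a]

    def is_goal(self, s):             # IS-GOAL(s)
        return s == self.goal

problem = RouteProblem("Arad", "Bucharest", ROADS)
print(problem.actions("Arad"))        # -> ['Zerind', 'Sibiu', 'Timisoara']
```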
Problem-Solving Agents Formulating problems Ø Our formulation of the problem of getting to Bucharest is a model—an abstract mathematical description—and not the real thing. Ø The process of removing detail from a representation is called abstraction. A good problem formulation has the right level of detail. If the actions were at the level of "move the right foot forward a centimeter" or "turn the steering wheel one degree left," the agent would probably never find its way out of the parking lot, let alone to Bucharest. Ø The abstraction is valid if we can elaborate any abstract solution into a solution in the more detailed world. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 36 Content Ø The Structure of Agents Ø Problem-Solving Agents Ø Example Problems Ø Search Algorithms Ø Uninformed Search Strategies Ø Informed (Heuristic) Search Strategies Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 37 Example Problems A standardized problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description and hence is suitable as a benchmark for researchers to compare the performance of algorithms. A real-world problem, such as robot navigation, is one whose solutions people actually use, and whose formulation is idiosyncratic, not standardized, because, for example, each robot has different sensors that produce different data. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 38 Example Problems Standardized problem Ø A grid world problem is a two-dimensional rectangular array of square cells in which agents can move from cell to cell. Ø Typically, the agent can move to any obstacle-free adjacent cell—horizontally or vertically and in some problems diagonally. Ø Cells can contain objects, which the agent can pick up, push, or otherwise act upon; a wall or other impassible obstacle in a cell prevents an agent from moving into that cell. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 39 Example Problems Standardized problem The vacuum world can be formulated as a grid world problem as follows: States: A state of the world says which objects are in which cells. For the vacuum world, the objects are the agent and any dirt. In the simple two-cell version, the agent can be in either of the two cells, and each cell can either contain dirt or not, so there are 2 · 2 · 2 = 8 states. In general, a vacuum environment with n cells has n · 2^n states. (Figure: the state-space graph for the two-cell vacuum world; there are 8 states and three actions for each state: L = Left, R = Right, S = Suck.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 40 Example Problems Standardized problem The vacuum world can be formulated as a grid world problem as follows: Ø Initial state: Any state can be designated as the initial state. Ø Actions: In the two-cell world there are three actions: Suck, move Left, and move Right. In a two-dimensional multi-cell world, there are more movement actions—for example, Forward, Backward, TurnRight, and TurnLeft. Ø Transition model: Suck removes any dirt from the agent's cell; Forward moves the agent ahead one cell in the direction it is facing, unless it hits a wall. Backward moves the agent in the opposite direction, while TurnRight and TurnLeft change the direction it is facing by 90°. Ø Goal states: The states in which every cell is clean. Ø Action cost: Each action costs 1. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 41
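A compact sketch of this formulation for the two-cell world; encoding a state as a (location, dirt_in_A, dirt_in_B) triple is an assumption of the sketch.

```python
from itertools import product

# Sketch of the two-cell vacuum world as a search problem. A state is
# (agent_location, dirt_in_A, dirt_in_B).

STATES = [(loc, a, b) for loc, a, b in product("AB", [True, False], [True, False])]
print(len(STATES))  # 2 * 2 * 2 = 8 states, as on the slide

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":  # removes any dirt from the agent's cell
        return (loc, False if loc == "A" else dirt_a,
                     False if loc == "B" else dirt_b)

def is_goal(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b   # every cell clean

print(result(("A", True, True), "Suck"))  # -> ('A', False, True)
```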
Example Problems Standardized problem Ø Another type of grid world is the sokoban puzzle, in which the agent's goal is to push a number of boxes, scattered about the grid, to designated storage locations. There can be at most one box per cell. Ø For a world with n non-obstacle cells and b boxes, there are n × n!/(b!(n − b)!) states; for example, on an 8 × 8 grid with a dozen boxes, there are over 200 trillion states. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 42
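The state counts quoted above are easy to verify numerically; the sketch below assumes all 64 cells of the 8 × 8 grid are non-obstacle.

```python
import math

# Check of the sokoban state count: n non-obstacle cells, b boxes, and one
# agent give n * C(n, b) states — the slide's n * n!/(b!(n-b)!).

def sokoban_states(n, b):
    return n * math.comb(n, b)

print(sokoban_states(64, 12))  # 210189740995584 ≈ 2.1e14, over 200 trillion
```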
Example Problems Real-world problems Ø We have already seen how the route-finding problem is defined in terms of specified locations and transitions along edges between them. Ø Route-finding algorithms are used in a variety of applications. Ø Some, such as Web sites and in-car systems, provide driving directions. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 43 Example Problems Real-world problems Consider the airline travel problems that must be solved by a travel-planning Web site: Ø States: Each state obviously includes a location (e.g., an airport) and the current time. Ø Initial state: The user's home airport. Ø Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed. Ø Transition model: The state resulting from taking a flight will have the flight's destination as the new location and the flight's arrival time as the new time. Ø Goal state: A destination city. Ø Action cost: A combination of monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer reward points, and so on. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 44 Example Problems Real-world problems Other problems: Ø Touring problems Ø A VLSI layout problem Ø Robot navigation Ø Automatic assembly sequencing Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 45 Content Ø The Structure of Agents Ø Problem-Solving Agents Ø Example Problems Ø Search Algorithms Ø Uninformed Search Strategies Ø Informed (Heuristic) Search Strategies Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 46 Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 47 Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 48 Solving Problems by Searching Part – 2 Dr. Oybek Eraliev, PhD, Department of Computer Engineering, Inha University in Tashkent. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 1 Content Ø The Structure of Agents Ø Problem-Solving Agents Ø Example Problems Ø Search Algorithms Ø Uninformed Search Strategies Ø Informed (Heuristic) Search Strategies Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 2 Search Algorithms Ø A search algorithm takes a search problem as input and returns a solution, or an indication of failure. Ø We consider algorithms that superimpose a search tree over the state-space graph, forming various paths from the initial state, trying to find a path that reaches a goal state. Ø Each node in the search tree corresponds to a state in the state space and the edges in the search tree correspond to actions. The root of the tree corresponds to the initial state of the problem. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 3 Search Algorithms Ø It is important to understand the distinction between the state space and the search tree. Ø The state space describes the set of states in the world, and the actions that allow transitions from one state to another. Ø The search tree describes paths between these states, reaching towards the goal. The search tree may have multiple paths to any given state, but each node in the tree has a unique path back to the root. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 4 Search Algorithms A simplified road map of part of Romania, with road distances in miles. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 5 Search Algorithms Ø Figure shows the first few steps in finding a path from Arad to Bucharest. The root node of the search tree is at the initial state, Arad. Ø We can expand the node by considering the available ACTIONS for that state, using the RESULT function to see where those actions lead to, and generating a new node (called a child node or successor node) for each of the resulting states. Ø Each child node has Arad as its parent node. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 6 Search Algorithms Ø Now we must choose which of these three child nodes to consider next. This is the essence of search—following up one option now and putting the others aside for later. Ø Suppose we choose to expand Sibiu first. Figure shows the result: a set of 6 unexpanded nodes. Ø We call this the frontier of the search tree. We say that any state that has had a node generated for it has been reached. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 7 Search Algorithms Ø Note that the frontier separates two regions of the state-space graph: an interior region where every state has been expanded, and an exterior region of states that have not yet been reached. Ø This property is illustrated in Figure. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 8 Search Algorithms Best-first Search Ø A very general approach is called best-first search, in which we choose a node, n, with minimum value of some evaluation function, f(n). Ø Figure shows the algorithm. On each iteration we choose a node on the frontier with minimum f(n) value, return it if its state is a goal state, and otherwise apply EXPAND to generate child nodes. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 9 Search Algorithms Best-first Search Ø Each child node is added to the frontier if it has not been reached before or is re-added if it is now being reached with a path that has a lower path cost than any previous path. Ø The algorithm returns either an indication of failure, or a node that represents a path to a goal. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 10 Search Algorithms Best-first Search Ø By employing different f(n) functions, we get different specific algorithms, which this lecture will cover. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 11
Search Algorithms Best-first Search Search data structures Search algorithms require a data structure to keep track of the search tree. A node in the tree is represented by a data structure with four components: Ø node.STATE: the state to which the node corresponds; Ø node.PARENT: the node in the tree that generated this node; Ø node.ACTION: the action that was applied to the parent's state to generate this node; Ø node.PATH-COST: the total cost of the path from the initial state to this node. In mathematical formulas, we use g(node) as a synonym for PATH-COST. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 12 Search Algorithms Best-first Search Ø We need a data structure to store the frontier. The appropriate choice is a queue of some kind, because the operations on a frontier are: Ø IS-EMPTY(frontier) returns true only if there are no nodes in the frontier. Ø POP(frontier) removes the top node from the frontier and returns it. Ø TOP(frontier) returns (but does not remove) the top node of the frontier. Ø ADD(node, frontier) inserts node into its proper place in the queue. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 13 Search Algorithms Best-first Search Ø Three kinds of queues are used in search algorithms: Ø A priority queue first pops the node with the minimum cost according to some evaluation function, f. It is used in best-first search. Ø A FIFO queue or first-in-first-out queue first pops the node that was added to the queue first; we shall see it is used in breadth-first search. Ø A LIFO queue or last-in-first-out queue (also known as a stack) pops first the most recently added node; we shall see it is used in depth-first search. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 14
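Putting the last few slides together—the node structure, the priority-queue frontier, and the best-first loop—here is a minimal sketch of best-first search; the problem interface (initial, actions, result, action_cost, is_goal) matches the hypothetical RouteProblem sketch earlier.

```python
import heapq
import itertools

# Sketch of best-first search with the node structure described above.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state, self.parent = state, parent
        self.action, self.path_cost = action, path_cost

def expand(problem, node):
    for action in problem.actions(node.state):
        s2 = problem.result(node.state, action)
        cost = node.path_cost + problem.action_cost(node.state, action, s2)
        yield Node(s2, node, action, cost)

def best_first_search(problem, f):
    counter = itertools.count()        # tie-breaker so heapq never compares Nodes
    node = Node(problem.initial)
    frontier = [(f(node), next(counter), node)]   # priority queue ordered by f
    reached = {problem.initial: node}  # best node found so far for each state
    while frontier:                    # IS-EMPTY check
        _, _, node = heapq.heappop(frontier)      # POP the minimum-f node
        if problem.is_goal(node.state):
            return node
        for child in expand(problem, node):
            s = child.state
            # Add if unreached, or re-add if reached by a cheaper path:
            if s not in reached or child.path_cost < reached[s].path_cost:
                reached[s] = child
                heapq.heappush(frontier, (f(child), next(counter), child))
    return None  # failure

# With f = g(n) (the path cost), this behaves as uniform-cost search:
# goal = best_first_search(problem, f=lambda n: n.path_cost)
```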
Search Algorithms Best-first Search Measuring problem-solving performance We can evaluate an algorithm's performance in four ways: Ø Completeness: Is the algorithm guaranteed to find a solution when there is one, and to correctly report failure when there is not? Ø Cost optimality: Does it find a solution with the lowest path cost of all solutions? Ø Time complexity: How long does it take to find a solution? This can be measured in seconds, or more abstractly by the number of states and actions considered. Ø Space complexity: How much memory is needed to perform the search? Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 15 Content Ø The Structure of Agents Ø Problem-Solving Agents Ø Example Problems Ø Search Algorithms Ø Uninformed Search Strategies Ø Informed (Heuristic) Search Strategies Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 16 Search Algorithms (Figure: a taxonomy of the search algorithms covered here. Informed Search: Greedy Best-first Search, A* Search. Uninformed Search: Breadth-first Search, Depth-first Search, Depth-limited Search, Bidirectional Search.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 17 Uninformed Search Strategies Ø An uninformed search algorithm is given no clue about how close a state is to the goal(s). Ø For example, consider our agent in Arad with the goal of reaching Bucharest. Ø An uninformed agent with no knowledge of Romanian geography has no clue whether going to Zerind or Sibiu is a better first step. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 18 Search Strategies Representation Find a path from A to E. (Figure: a six-node graph with nodes A–F, with the current frontier shown beside it.) Start with a frontier that contains the initial state. Repeat: If the frontier is empty, then no solution. Remove a node from the frontier. If node contains goal state, return the solution. Expand node, add resulting nodes to the frontier. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 19 Search Strategies Representation Find a path from A to E. (Figure: the same graph one step later, with the frontier updated.) Start with a frontier that contains the initial state. Repeat: If the frontier is empty, then no solution. Remove a node from the frontier. If node contains goal state, return the solution. Expand node, add resulting nodes to the frontier. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 20 Search Strategies Representation Revised Approach Start with a frontier that contains the initial state. Start with an empty explored set. Repeat: If the frontier is empty, then no solution. Remove a node from the frontier. If node contains goal state, return the solution. Add the node to the explored set. Expand node, add resulting nodes to the frontier if they aren't already in the frontier or the explored set. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 21 Search Strategies Representation Find a path from A to E. Stack: last-in first-out data type. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 22 Search Strategies Representation Find a path from A to E. (Figure: the graph explored with a stack; the frontier and the explored set are shown at each step.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 23 Search Strategies Representation Find a path from A to E. Depth-first Search: a search algorithm that always expands the deepest node in the frontier. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 24 Search Strategies Representation Find a path from A to E. Breadth-first Search: a search algorithm that always expands the shallowest node in the frontier. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 25 Search Strategies Representation Find a path from A to E. Queue: first-in first-out data type. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 26 Uninformed Search Strategies Breadth-first Search Ø When all actions have the same cost, an appropriate strategy is breadth-first search, in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. Ø This is a systematic search strategy that is therefore complete even on infinite state spaces. Ø We could implement breadth-first search as a call to BEST-FIRST-SEARCH where the evaluation function f(n) is the depth of the node—that is, the number of actions it takes to reach the node. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 27 Uninformed Search Strategies Breadth-first Search Ø Figure shows the algorithm with the early-goal efficiency enhancements. Ø Breadth-first search always finds a solution with a minimal number of actions. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 28 Search Strategies Representation Find a path from A to E. (Figure: the graph explored with a queue; the frontier and the explored set are shown at each step.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 29 Search Strategies Representation Maze problem -> Depth-First Search (Figure: a maze with start A and goal B.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 30 Search Strategies Representation Maze problem -> Depth-First Search Depth-first search does not guarantee to find the best solution. (Figure: the depth-first path through the maze.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 31 Search Strategies Representation Maze problem -> Breadth-First Search (Figure: the same maze.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 32 Search Strategies Representation Maze problem -> Breadth-First Search (Figure: the breadth-first exploration of the maze.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 33
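A sketch of the "revised approach" above, in which a single loop becomes depth-first or breadth-first search purely by the choice of frontier discipline (stack vs. queue); the graph edges below are made up, since the slide's figure is not reproduced in the transcript.

```python
from collections import deque

# One loop, two algorithms: popping from the right (LIFO/stack) gives
# depth-first search; popping from the left (FIFO/queue) gives breadth-first.

GRAPH = {"A": ["B"], "B": ["C", "D"], "C": ["E"], "D": ["F"], "E": [], "F": []}

def graph_search(start, goal, depth_first=False):
    frontier = deque([[start]])       # the frontier holds whole paths here
    explored = set()
    while frontier:
        path = frontier.pop() if depth_first else frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)            # add the node to the explored set
        for neighbor in GRAPH[node]:
            if neighbor not in explored:
                frontier.append(path + [neighbor])
    return None                       # frontier empty: no solution

print(graph_search("A", "E"))                    # -> ['A', 'B', 'C', 'E']
print(graph_search("A", "E", depth_first=True))  # depth-first variant
```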
Content Ø The Structure of Agents Ø Problem-Solving Agents Ø Example Problems Ø Search Algorithms Ø Uninformed Search Strategies Ø Informed (Heuristic) Search Strategies Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 34 Informed (Heuristic) Search Strategies Ø Uninformed search: a search strategy that uses no problem-specific knowledge. Ø Informed search: a search strategy that uses problem-specific knowledge to find solutions more efficiently. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 35 Informed (Heuristic) Search Strategies Greedy best-first search Ø Greedy best-first search is a form of best-first search that expands first the node with the lowest h(n) value—the node that appears to be closest to the goal—on the grounds that this is likely to lead to a solution quickly. Ø So the evaluation function f(n) = h(n). Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 36 Informed (Heuristic) Search Strategies Heuristic function. Manhattan distance. (Figure: a maze with start A and goal B; the Manhattan distance ignores walls and counts horizontal plus vertical steps to B.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 37 Informed (Heuristic) Search Strategies Heuristic function. Manhattan distance. (Figure: the same maze with every open cell labeled with its Manhattan distance to the goal B.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 38 Informed (Heuristic) Search Strategies Greedy best-first search (Figure: greedy best-first search on the maze, always expanding the open cell with the smallest heuristic value.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 39 Informed (Heuristic) Search Strategies Greedy best-first search (Figure: a second maze, again labeled with Manhattan distances.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 40 Informed (Heuristic) Search Strategies Greedy best-first search (Figure: greedy best-first search traced on the second maze.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 41 Informed (Heuristic) Search Strategies A* search A* search: a search algorithm that expands the node with the lowest value of g(n) + h(n), where g(n) = cost to reach the node and h(n) = estimated cost to the goal. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 42 Informed (Heuristic) Search Strategies A* search (Figure: the maze with each expanded cell annotated g + h, e.g. the first step from A has g = 1 and h = 16, giving f = 17.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 43
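A sketch of A* on a small grid with the Manhattan-distance heuristic, in the spirit of the maze figures; the maze layout itself is a made-up stand-in, since the figures are not reproduced here.

```python
import heapq

# A* on a grid: f(n) = g(n) + h(n), with h = Manhattan distance to the goal.
# '#' marks walls; every step costs 1.

MAZE = ["S..#",
        ".#.#",
        ".#..",
        "...G"]

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
    goal = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "G")
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(h(start), 0, start, [start])]               # (f, g, position, path)
    best_g = {start: 0}
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)  # expand lowest f = g + h
        if pos == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and maze[r][c] != "#":
                g2 = g + 1
                if g2 < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = g2
                    heapq.heappush(frontier, (g2 + h((r, c)), g2, (r, c), path + [(r, c)]))
    return None

print(solve(MAZE))  # shortest S-to-G path as a list of (row, col) cells
```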
Content Ø The Structure of Agents Ø Problem-Solving Agents Ø Example Problems Ø Search Algorithms Ø Uninformed Search Strategies Ø Informed (Heuristic) Search Strategies Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 44 Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 45 Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 46 Search in Complex Environments Part – 1 Dr. Oybek Eraliev, Department of Computer Engineering, Inha University in Tashkent. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 1 Content Ø Local Search and Optimization Problems Ø Local Search in Continuous Spaces Ø Search with Nondeterministic Actions Ø Search in Partially Observable Environments Ø Online Search Agents and Unknown Environments Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 2 Local Search and Optimization Problems Ø Local search algorithms operate by searching from a start state to neighboring states, without keeping track of the paths, nor the set of states that have been reached. Ø That means they are not systematic—they might never explore a portion of the search space where a solution actually resides. Ø However, they have two key advantages: 1. they use very little memory; 2. they can often find reasonable solutions in large or infinite state spaces for which systematic algorithms are unsuitable. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 3 Local Search and Optimization Problems Ø Local search algorithms can also solve optimization problems, in which the aim is to find the best state according to an objective function. Ø To understand local search, consider the states of a problem laid out in a state-space landscape, as shown in the Figure. (Figure: a one-dimensional state-space landscape; the x-axis is the state space, the y-axis is the objective function, with the current state marked.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 4 Local Search and Optimization Problems Ø Each point (state) in the landscape has an "elevation," defined by the value of the objective function. If elevation corresponds to an objective function, then the aim is to find the highest peak—a global maximum—and we call the process hill climbing. Ø If elevation corresponds to cost, then the aim is to find the lowest valley—a global minimum—and we call it gradient descent. (Figure: the same landscape with the global maximum, a local maximum, a flat local maximum, and a shoulder labeled.) Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 5 Local Search and Optimization Problems Hill-climbing search The hill-climbing search algorithm is shown in Figure. It keeps track of one current state and on each iteration moves to the neighboring state with highest value—that is, it heads in the direction that provides the steepest ascent. It terminates when it reaches a "peak" where no neighbor has a higher value. Hill climbing does not look ahead beyond the immediate neighbors of the current state. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 6
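A minimal sketch of steepest-ascent hill climbing as just described; the toy objective function and neighborhood are assumptions for illustration.

```python
import random

# Steepest-ascent hill climbing: keep one current state, move to the best
# neighbor, stop when no neighbor is better.

def hill_climbing(initial, neighbors, value):
    current = initial
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current            # a "peak": no neighbor has a higher value
        current = best

# Toy landscape: maximize f(x) = -(x - 7)^2 over the integers via +/-1 moves.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(random.randint(0, 20), step, f))  # -> 7
```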
Local Search and Optimization Problems Hill-climbing search Ø Note that one way to use hill-climbing search is to use the negative of a heuristic cost function as the objective function; that will climb locally to the state with smallest heuristic distance to the goal. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 7 Local Search and Optimization Problems Hill-climbing search Ø Hill climbing is sometimes called greedy local search because it grabs a good neighbor state without thinking ahead about where to go next. Ø Although greed is considered one of the seven deadly sins, it turns out that greedy algorithms often perform quite well. Ø Hill climbing can make rapid progress toward a solution because it is usually quite easy to improve a bad state. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 8 Local Search and Optimization Problems Hill-climbing search Unfortunately, hill climbing can get stuck for any of the following reasons: Ø Local maxima: A local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum. Ø Ridges: Ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate. Ø Plateaus: A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 9 Local Search and Optimization Problems Hill-climbing search Ø How could we solve more problems? Ø One answer is to keep going when we reach a plateau—to allow a sideways move in the hope that the plateau is really a shoulder. But if we are actually on a flat local maximum, then this approach will wander on the plateau forever. Therefore, we can limit the number of consecutive sideways moves, stopping after, say, 100 consecutive sideways moves. Ø Stochastic hill climbing chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move. This usually converges more slowly than steepest ascent, but in some state landscapes, it finds better solutions. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 10 Local Search and Optimization Problems Hill-climbing search Ø How could we solve more problems? Ø First-choice hill climbing implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state. This is a good strategy when a state has many (e.g., thousands) of successors. Ø Another variant is random-restart hill climbing, which adopts the adage, "If at first you don't succeed, try, try again." It conducts a series of hill-climbing searches from randomly generated initial states, until a goal is found. It is complete with probability 1, because it will eventually generate a goal state as the initial state. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 11 Local Search and Optimization Problems Hill-climbing search Ø How could we solve more problems? Ø Illustration of why ridges cause difficulties for hill climbing. The grid of states is superimposed on a ridge rising from left to right, creating a sequence of local maxima that are not directly connected to each other. From each local maximum, all the available actions point downhill. Ø Topologies like this are common in low-dimensional state spaces, such as points in a two-dimensional plane. But in state spaces with hundreds or thousands of dimensions, this intuitive picture does not hold, and there are usually at least a few dimensions that make it possible to escape from ridges and plateaus. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 12
Local Search and Optimization Problems Simulated annealing Ø A hill-climbing algorithm that never makes "downhill" moves toward states with lower value (or higher cost) is always vulnerable to getting stuck in a local maximum. Ø In contrast, a purely random walk that moves to a successor state without concern for the value will eventually stumble upon the global maximum but will be extremely inefficient. Ø Therefore, it seems reasonable to try to combine hill climbing with a random walk in a way that yields both efficiency and completeness. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 13 Local Search and Optimization Problems Simulated annealing Ø Simulated annealing is such an algorithm. Ø In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a low-energy crystalline state. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 14 Local Search and Optimization Problems Simulated annealing Ø The overall structure of the simulated-annealing algorithm is similar to hill climbing. Instead of picking the best move, however, it picks a random move. If the move improves the situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 15 Local Search and Optimization Problems Simulated annealing The simulated annealing algorithm, a version of stochastic hill climbing where some downhill moves are allowed. The schedule input determines the value of the "temperature" T as a function of time. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 16
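A sketch of the simulated-annealing loop just described: pick a random move, always accept improvements, and accept a worsening move with probability e^(ΔE/T), where the temperature T decays according to the schedule. The geometric cooling schedule and the toy objective are assumptions of this sketch.

```python
import math
import random

# Simulated annealing: stochastic hill climbing where some downhill moves
# are allowed, with acceptance probability e^(delta_e / T).

def simulated_annealing(initial, neighbors, value, schedule):
    current = initial
    for t in range(1, 10_000):
        T = schedule(t)
        if T <= 1e-12:                 # temperature has effectively reached zero
            return current
        nxt = random.choice(neighbors(current))
        delta_e = value(nxt) - value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
    return current

f = lambda x: -(x - 7) ** 2            # maximize; global maximum at x = 7
moves = lambda x: [x - 1, x + 1]
schedule = lambda t: 20 * 0.95 ** t    # geometric cooling (an assumption)
print(simulated_annealing(0, moves, f, schedule))  # usually ends at or near 7
```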
Local Search and Optimization Problems Local beam search Ø Keeping just one node in memory might seem to be an extreme reaction to the problem of memory limitations. Ø The local beam search algorithm keeps track of k states rather than just one. It begins with k randomly generated states. At each step, all the successors of all k states are generated. If any one is a goal, the algorithm halts. Otherwise, it selects the k best successors from the complete list and repeats. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 17 Local Search and Optimization Problems Local beam search Ø Local beam search can suffer from a lack of diversity among the k states—they can become clustered in a small region of the state space, making the search little more than a k-times-slower version of hill climbing. Ø A variant called stochastic beam search, analogous to stochastic hill climbing, helps alleviate this problem. Instead of choosing the top k successors, stochastic beam search chooses successors with probability proportional to the successor's value, thus increasing diversity. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 18 Local Search and Optimization Problems Evolutionary algorithms Ø Evolutionary algorithms can be seen as variants of stochastic beam search that are explicitly motivated by the metaphor of natural selection in biology: there is a population of individuals (states), in which the fittest (highest value) individuals produce offspring (successor states) that populate the next generation, a process called recombination. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 19 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithms are similar to stochastic beam search, but with the addition of the crossover operation. Ø This is advantageous if there are blocks that perform useful functions. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 20 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø A genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Ø Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover, and selection. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 21 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø A population is a group of individuals or chromosomes, and each individual is a candidate solution to the problem. Ø A chromosome is an individual that contains a list of parameters; these parameters are called genes. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 22 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Encoding methods: Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 23 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Fitness function: Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 24 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Termination criteria: Ø The reproduction process is repeated until a termination condition has been reached. Common terminating conditions are: a solution is found that satisfies minimum criteria; a fixed number of generations is reached; the allocated budget (computation time/money) is reached; manual inspection; or combinations of the above. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 25 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Selection: Ø Selection is the process of choosing the parents that will generate the children (also called offspring) that form part of the next generation. Among the most used selection methods are: Ø Elitism Selection: select the top K chromosomes and pass them to the next generation without making any changes to them. Ø Roulette Wheel Selection: each parent is represented on the wheel by a portion that depends on its fitness value, so the parent with the best fitness value has the best chance of being selected. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 26 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Selection: Ø Stochastic Universal Sampling Selection (SUS Selection): very similar to the roulette wheel, but SUS selection uses more than one fixed point. Ø Tournament Selection: first choose a number K that represents the tournament pool size, then select K individuals from the current population, put them into the pool, and choose the best individual from the pool. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 27 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Selection: Ø Rank Selection: when the problem is very close to converging, the individuals in the population have very close fitness values; if we use the roulette wheel, they will have nearly the same probability of being selected. We therefore represent the chromosomes on the wheel using a different parameter—the rank instead of the fitness value—so that chromosomes with a good ranking have a better chance of being selected. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 28
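A sketch of roulette-wheel selection as described above; the manual wheel spin makes the mechanism explicit (random.choices(population, weights=...) would do the same in one line).

```python
import random

# Roulette-wheel selection: each chromosome gets a slice of the wheel
# proportional to its fitness, so fitter parents are more likely (but not
# certain) to be picked.

def roulette_wheel(population, fitness):
    weights = [fitness(ind) for ind in population]
    total = sum(weights)
    spin = random.uniform(0, total)
    running = 0.0
    for ind, w in zip(population, weights):
        running += w
        if spin <= running:
            return ind
    return population[-1]  # guard against floating-point round-off

# With f(x) = x^2 on the population below, '11001' (25) owns about 54% of
# the wheel, matching the worked example later in this section.
population = ["01100", "11001", "00101", "10011"]
fitness = lambda s: int(s, 2) ** 2
print(roulette_wheel(population, fitness))
```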
Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Crossover: Ø The crossover operation is the process of producing new chromosomes from the parent chromosomes (parents are selected from the old population using a selection method). Ø Among the most used crossover methods are: Ø One-Point Crossover: a random point is chosen to be the crossover point, then we fill the child with genes from both parents. Ø Multi-Point Crossover: two random points are chosen to be the crossover points, then we fill the child with genes from both parents. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 29 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Crossover: Ø Davis Order Crossover (OX1): we choose two random crossover points in the first parent and copy that segment into the child, then fill the rest of the child's genes with genes from the second parent. Ø Uniform Crossover: we flip a coin for each gene in the two parents to decide whether or not it will be included in the offspring (child). Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 30 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Mutation: mutation can be defined as a small random modification of the chromosome, made to obtain a new solution. It is used to maintain and introduce diversity in the genetic population and is generally applied with a low probability. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 31 Local Search and Optimization Problems Evolutionary algorithms Ø Genetic algorithm: Ø Mutation: bit-flip mutation, swap mutation, scramble mutation, inversion mutation. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 32 Local Search and Optimization Problems Genetic algorithm (Example) Ø Objective function (fitness function): consider maximizing the function f(x) = x², where x is permitted to vary between 0 and 31. Ø Select encoding technique: the minimum value is 0 and the maximum value is 31. Using a five-bit binary integer, numbers between 0 (00000) and 31 (11111) can be obtained. Ø Select initial population: to start with, the initial population is selected at random. Here an initial population of size 4 is chosen, but any population size can be selected depending on the requirement and application. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 33 Local Search and Optimization Problems Genetic algorithm (Example)

String No. | Initial Population | X value | Fitness function | Prob | Prob % | Expected Count | Actual Count
1 | 01100 | 12 | 144 | 0.1247 | 12.47 | 0.4987 | 1
2 | 11001 | 25 | 625 | 0.5411 | 54.11 | 2.1645 | 2
3 | 00101 | 5 | 25 | 0.0216 | 2.16 | 0.0866 | 0
4 | 10011 | 19 | 361 | 0.3126 | 31.26 | 1.2502 | 1
Sum | | | 1155 | 1.0 | 100 | 4 | 4
Average | | | 288.75 | 0.25 | 25 | 1 | 1
Maximum | | | 625 | 0.5411 | 54.11 | 2.1645 | 2

Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 34 Local Search and Optimization Problems Genetic algorithm (Example)

String No. | Mating Pool | Crossover Point | Offspring after crossover | X value | Fitness function
1 | 01100 | 4 | 01101 | 13 | 169
2 | 11001 | 4 | 11000 | 24 | 576
3 | 11001 | 2 | 11011 | 27 | 729
4 | 10011 | 2 | 10001 | 17 | 289
Sum | | | | | 1763
Average | | | | | 440.75
Maximum | | | | | 729

Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 35 Local Search and Optimization Problems Genetic algorithm (Example)

String No. | Offspring after crossover | Chromosome for flipping | Offspring after mutation | X value | Fitness function
1 | 01101 | 10000 | 11101 | 29 | 841
2 | 11000 | 00000 | 11000 | 24 | 576
3 | 11011 | 00000 | 11011 | 27 | 729
4 | 10001 | 00101 | 10100 | 20 | 400
Sum | | | | | 2546
Average | | | | | 636.5
Maximum | | | | | 841

Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 36 Content Ø Local Search and Optimization Problems Ø Local Search in Continuous Spaces Ø Search with Nondeterministic Actions Ø Search in Partially Observable Environments Ø Online Search Agents and Unknown Environments Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 37
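The worked example above can be checked in a few lines; the crossover pairings below are read off the mating-pool table and are otherwise assumptions of this sketch.

```python
# Recomputing the worked example above (maximize f(x) = x^2, 5-bit encoding).

population = ["01100", "11001", "00101", "10011"]     # initial population
fitness = [int(s, 2) ** 2 for s in population]
total = sum(fitness)
print(fitness, total)                  # [144, 625, 25, 361] 1155
print([round(f / total, 4) for f in fitness])
#   -> [0.1247, 0.5411, 0.0216, 0.3126], the Prob column of the first table

def crossover(p1, p2, point):
    # one-point crossover: head of p1, tail of p2
    return p1[:point] + p2[point:]

offspring = [crossover("01100", "11001", 4),   # -> 01101 (13)
             crossover("11001", "01100", 4),   # -> 11000 (24)
             crossover("11001", "10011", 2),   # -> 11011 (27)
             crossover("10011", "11001", 2)]   # -> 10001 (17)
print(sum(int(s, 2) ** 2 for s in offspring))  # 1763, as in the second table

def mutate(chrom, mask):
    # bit-flip mutation: flip the bits where the mask has a 1
    return "".join(str(int(c) ^ int(m)) for c, m in zip(chrom, mask))

mutated = [mutate(o, m) for o, m in zip(offspring,
           ["10000", "00000", "00000", "00101"])]
print(mutated, sum(int(s, 2) ** 2 for s in mutated))
#   -> ['11101', '11000', '11011', '10100'] 2546, as in the third table
```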
Local Search in Continuous Spaces Ø A continuous action space has an infinite branching factor, and thus can't be handled by most of the algorithms we have covered so far (with the exception of first-choice hill climbing and simulated annealing). Ø Suppose we want to place three new airports anywhere in Romania, such that the sum of squared straight-line distances from each city on the map to its nearest airport is minimized. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 38 Local Search in Continuous Spaces Ø The state space is then defined by the coordinates of the three airports: (x1, y1), (x2, y2), and (x3, y3). Ø This is a six-dimensional space; we also say that states are defined by six variables. Ø In general, states are defined by an n-dimensional vector of variables, x. Moving around in this space corresponds to moving one or more of the airports on the map. Ø The objective function f(x) = f(x1, y1, x2, y2, x3, y3) is relatively easy to compute for any particular state once we compute the closest cities. Ø Let Ci be the set of cities whose closest airport (in the state x) is airport i. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 39 Local Search in Continuous Spaces f(x) = f(x1, y1, x2, y2, x3, y3) = Σ_{i=1..3} Σ_{c ∈ Ci} [ (xi − xc)² + (yi − yc)² ] Ø This equation is correct not only for the state x but also for states in the local neighborhood of x. Ø However, it is not correct globally; if we stray too far from x (by altering the location of one or more of the airports by a large amount) then the set of closest cities for that airport changes, and we need to recompute Ci. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 40 Local Search in Continuous Spaces Ø One way to deal with a continuous state space is to discretize it. Ø For example, instead of allowing the (xi, yi) locations to be any point in continuous two-dimensional space, we could limit them to fixed points on a rectangular grid with spacing of size δ (delta). Ø Often, we have an objective function expressed in a mathematical form such that we can use calculus to solve the problem analytically rather than empirically. Ø Many methods attempt to use the gradient of the landscape to find a maximum. The gradient of the objective function is a vector ∇f that gives the magnitude and direction of the steepest slope. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 41 Local Search in Continuous Spaces Ø For our problem, we have ∇f = ( ∂f/∂x1, ∂f/∂y1, ∂f/∂x2, ∂f/∂y2, ∂f/∂x3, ∂f/∂y3 ). Ø In some cases, we can find a maximum by solving the equation ∇f = 0. Ø In many cases, however, this equation cannot be solved in closed form. Ø For example, with three airports, the expression for the gradient depends on what cities are closest to each airport in the current state. Dr. Oybek Eraliyev Class: Artificial Intelligence SOC4040 42
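A numerical sketch of the airport-placement objective and its gradient; the city coordinates are made up, and the gradient formula follows by differentiating the sum of squares with the sets Ci held fixed.

```python
# Sketch of the airport-placement objective above. Cities and coordinates
# are invented; the point is the shape of f and of its gradient.

CITIES = [(2.0, 3.0), (5.0, 1.0), (9.0, 6.0), (4.0, 7.0), (8.0, 2.0)]

def closest_sets(airports):
    # C_i = the cities whose nearest airport (by squared distance) is airport i
    sets = [[] for _ in airports]
    for cx, cy in CITIES:
        d2 = [(cx - x) ** 2 + (cy - y) ** 2 for x, y in airports]
        sets[d2.index(min(d2))].append((cx, cy))
    return sets

def f(airports):
    # f(x) = sum_i sum_{c in C_i} (x_i - x_c)^2 + (y_i - y_c)^2
    return sum((x - cx) ** 2 + (y - cy) ** 2
               for (x, y), cities in zip(airports, closest_sets(airports))
               for cx, cy in cities)

def gradient(airports):
    # With C_i held fixed: df/dx_i = 2 * sum_{c in C_i} (x_i - x_c), same for y.
    return [(2 * sum(x - cx for cx, _ in cities),
             2 * sum(y - cy for _, cy in cities))
            for (x, y), cities in zip(airports, closest_sets(airports))]

airports = [(3.0, 3.0), (7.0, 5.0), (6.0, 2.0)]
print(f(airports), gradient(airports))
```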
Search in Complex Environments
Part – 2
Dr. Oybek Eraliev, Department of Computer Engineering, Inha University in Tashkent. Email: [email protected]

Content
Ø Local Search and Optimization Problems
Ø Local Search in Continuous Spaces
Ø Search with Nondeterministic Actions
Ø Search in Partially Observable Environments
Ø Online Search Agents and Unknown Environments

Search with Nondeterministic Actions
Ø If the next state of the environment is completely determined by the current state and the action executed by the agent(s), then we say the environment is deterministic; otherwise, it is nondeterministic.
Ø In principle, an agent need not worry about uncertainty in a fully observable, deterministic environment. If the environment is partially observable, however, then it could appear to be nondeterministic.
Ø In partially observable and nondeterministic environments, the solution to a problem is no longer a sequence, but rather a conditional plan that specifies what to do depending on what percepts the agent receives while executing the plan.

The erratic vacuum world
Ø The vacuum world has eight states, as shown in the figure. There are three actions—Right, Left, and Suck—and the goal is to clean up all the dirt (states 7 and 8). If the environment is fully observable, deterministic, and completely known, then the problem is easy to solve with any of the algorithms we have already seen, and the solution is an action sequence.
Ø For example, if the initial state is 1, then the action sequence [Suck, Right, Suck] will reach a goal state, 8.
Ø Now suppose that we introduce nondeterminism in the form of a powerful but erratic vacuum cleaner. In the erratic vacuum world, the Suck action works as follows:
o When applied to a dirty square, the action cleans the square and sometimes cleans up dirt in an adjacent square, too.
o When applied to a clean square, the action sometimes deposits dirt on the carpet.
Ø To provide a precise formulation of this problem, we need to generalize the notion of a transition model. Instead of defining the transition model by a RESULT function that returns a single outcome state, we use a RESULTS function that returns a set of possible outcome states. For example, in the erratic vacuum world, the Suck action in state 1 cleans up either just the current location or both locations:

RESULTS(1, Suck) = {5, 7}

Ø If we start in state 1, no single sequence of actions solves the problem, but the following conditional plan does:

[Suck, if State = 5 then [Right, Suck] else []]

Ø Here we see that a conditional plan can contain if–then–else steps; this means that solutions are trees rather than sequences.
Ø The conditional in the if statement tests to see what the current state is; this is something the agent will be able to observe at runtime but doesn't know at planning time.
Ø Alternatively, we could have had a formulation that tests the percept rather than the state. Many problems in the real, physical world are contingency problems, because exact prediction of the future is impossible. For this reason, many people keep their eyes open while walking around.
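To make the RESULTS formulation concrete, here is a minimal simulation of executing this conditional plan. RESULTS(1, Suck) = {5, 7} is given above, and the entries for states 5 and 6 follow from the deterministic sequence [Suck, Right, Suck] reaching state 8; treat the exact numbering as an assumption about the figure:

```python
import random

# Partial nondeterministic transition model for the erratic vacuum world.
RESULTS = {
    (1, "Suck"): {5, 7},   # cleans the current square, sometimes both
    (5, "Right"): {6},
    (6, "Suck"): {8},      # right square dirty, left already clean
}

GOALS = {7, 8}

def execute_plan(state):
    # Conditional plan: [Suck, if State = 5 then [Right, Suck] else []]
    state = random.choice(sorted(RESULTS[(state, "Suck")]))
    if state == 5:
        state = random.choice(sorted(RESULTS[(state, "Right")]))
        state = random.choice(sorted(RESULTS[(state, "Suck")]))
    return state

for _ in range(5):
    final = execute_plan(1)
    print(final, final in GOALS)   # always ends in a goal state, 7 or 8
```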
AND–OR search trees
Ø In a deterministic environment, the only branching is introduced by the agent's own choices in each state: I can do this action or that action. We call these nodes OR nodes.
Ø In the vacuum world, for example, at an OR node the agent chooses Left or Right or Suck. In a nondeterministic environment, branching is also introduced by the environment's choice of outcome for each action. We call these nodes AND nodes.
Ø For example, the Suck action in state 1 results in the belief state {5, 7}, so the agent would need to find a plan for state 5 and for state 7. These two kinds of nodes alternate, leading to an AND–OR tree, as illustrated in the figure.
Ø The figure shows the first two levels of the search tree for the erratic vacuum world. State nodes are OR nodes where some action must be chosen. At the AND nodes, shown as circles, every outcome must be handled, as indicated by the arc linking the outgoing branches. The solution found is shown in bold lines.
Ø The figure also gives a recursive, depth-first algorithm for AND–OR graph search. One key aspect of the algorithm is the way in which it deals with cycles, which often arise in nondeterministic problems (e.g., if an action sometimes has no effect or if an unintended effect can be corrected).
Ø If the current state is identical to a state on the path from the root, then the algorithm returns with failure. This doesn't mean that there is no solution from the current state; it simply means that if there is a noncyclic solution, it must be reachable from the earlier incarnation of the current state, so the new incarnation can be discarded.
Ø With this check, we ensure that the algorithm terminates in every finite state space, because every path must reach a goal, a dead end, or a repeated state.
Ø Notice that the algorithm does not check whether the current state is a repetition of a state on some other path from the root, which is important for efficiency.
Ø AND–OR graphs can be explored either breadth-first or best-first. The concept of a heuristic function must be modified to estimate the cost of a contingent solution rather than a sequence, but the notion of admissibility carries over, and there is an analog of the A* algorithm for finding optimal solutions. A sketch of the depth-first version appears below.
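Here is a minimal Python sketch of that recursive, depth-first AND–OR search. It assumes a problem object with initial, actions(s), results(s, a), and is_goal(s) (hypothetical names); it returns [] for an already-satisfied goal, a nested [action, {outcome: subplan}] structure otherwise, or None on failure:

```python
def and_or_search(problem):
    return or_search(problem, problem.initial, path=[])

def or_search(problem, state, path):
    # OR node: succeed if some action has a plan for all of its outcomes.
    if problem.is_goal(state):
        return []
    if state in path:             # cycle on the current path: fail here
        return None
    for action in problem.actions(state):
        plan = and_search(problem, problem.results(state, action),
                          [state] + path)
        if plan is not None:
            return [action, plan]
    return None

def and_search(problem, states, path):
    # AND node: every possible outcome state needs its own subplan.
    plans = {}
    for s in states:
        subplan = or_search(problem, s, path)
        if subplan is None:
            return None
        plans[s] = subplan
    return plans
```

On the erratic vacuum world, this should yield a plan equivalent to [Suck, if State = 5 then [Right, Suck] else []].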
Try, try again
Ø Consider a slippery vacuum world, which is identical to the ordinary (non-erratic) vacuum world except that movement actions sometimes fail, leaving the agent in the same location. For example, moving Right in state 1 leads to the belief state {1, 2}.
Ø The figure shows part of the search graph; clearly, there are no longer any acyclic solutions from state 1, and AND-OR-SEARCH would return with failure.
Ø There is, however, a cyclic solution, which is to keep trying Right until it works. We can express this with a new while construct:

[Suck, while State = 5 do Right, Suck]

or by adding a label to denote some portion of the plan and referring to that label later:

[Suck, L1: Right, if State = 5 then L1 else Suck]

Ø When is a cyclic plan a solution? A minimum condition is that every leaf is a goal state and that a leaf is reachable from every point in the plan.
Ø In addition, we need to consider the cause of the nondeterminism. If it is really the case that the vacuum robot's drive mechanism works some of the time, but randomly and independently slips on other occasions, then the agent can be confident that if the action is repeated enough times, eventually it will work and the plan will succeed.
Ø But if the nondeterminism is due to some unobserved fact about the robot or environment—perhaps a drive belt has snapped, and the robot will never move—then repeating the action will not help.

Content
Ø Local Search and Optimization Problems
Ø Local Search in Continuous Spaces
Ø Search with Nondeterministic Actions
Ø Search in Partially Observable Environments
Ø Online Search Agents and Unknown Environments

Search in Partially Observable Environments
Searching with no observation
Ø When the agent's percepts provide no information at all, we have what is called a sensorless problem.
Ø At first, you might think the sensorless agent has no hope of solving a problem if it has no idea what state it starts in, but sensorless solutions are surprisingly common and useful, primarily because they don't rely on sensors working properly.
Ø In manufacturing systems, for example, many methods have been developed for orienting parts correctly from an unknown initial position by using a sequence of actions with no sensing at all. Similarly, a doctor might prescribe a broad-spectrum treatment rather than waiting for test results: the sensorless plan saves time and money and avoids the risk of the infection worsening before the test results are available.
Ø Consider a sensorless version of the (deterministic) vacuum world. Assume that the agent knows the geography of its world, but not its own location or the distribution of dirt. In that case, its initial belief state is {1, 2, 3, 4, 5, 6, 7, 8}.
Ø Now, if the agent moves Right, it will be in one of the states {2, 4, 6, 8}—the agent has gained information without perceiving anything!
Ø After [Right, Suck] the agent will always end up in one of the states {4, 8}. Finally, after [Right, Suck, Left, Suck] the agent is guaranteed to reach goal state 7, no matter what the start state was.
Ø We say that the agent can coerce the world into state 7. A short simulation of this coercion appears below.
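Here is a minimal simulation of that coercion. To avoid assuming the exact 1–8 numbering, states are encoded as (location, left-square-dirty, right-square-dirty) triples; the final singleton corresponds to state 7 (agent on the left, both squares clean):

```python
def result(state, action):
    # Deterministic transition model for the two-square vacuum world.
    loc, dirt_l, dirt_r = state
    if action == "Right":
        return ("R", dirt_l, dirt_r)
    if action == "Left":
        return ("L", dirt_l, dirt_r)
    if action == "Suck":  # cleans only the current square
        return (loc,
                False if loc == "L" else dirt_l,
                False if loc == "R" else dirt_r)

# Initial belief state: every combination of location and dirt.
belief = {(loc, dl, dr) for loc in "LR"
          for dl in (True, False) for dr in (True, False)}

for action in ["Right", "Suck", "Left", "Suck"]:
    belief = {result(s, action) for s in belief}
    print(action, len(belief))   # the belief state shrinks: 4, 2, 2, 1

print(belief)   # a single state: ('L', False, False), i.e., "state 7"
```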
Ø The solution to a sensorless problem is a sequence of actions, not a conditional plan (because there is no perceiving). But we search in the space of belief states rather than physical states. In belief-state space the problem is fully observable, because the agent always knows its own belief state.
Ø Furthermore, the solution (if any) for a sensorless problem is always a sequence of actions. This is because, as in the ordinary problems of the previous lecture, the percepts received after each action are completely predictable, so there are no contingencies to plan for. This is true even if the environment is nondeterministic.
Ø The original problem, P, has components ACTIONS_P, RESULT_P, and so on. The belief-state problem has the following components:
Ø States: the belief-state space contains every possible subset of the physical states. If P has N states, then the belief-state problem has 2^N belief states, although many of those may be unreachable from the initial state.
Ø Initial state: typically the belief state consisting of all states in P, although in some cases the agent will have more knowledge than this.
Ø Actions: this is slightly tricky. Suppose the agent is in belief state b = {s1, s2}, but ACTIONS_P(s1) ≠ ACTIONS_P(s2); then the agent is unsure which actions are legal. If we assume that illegal actions have no effect on the environment, then it is safe to take the union of all the actions in any of the physical states in the current belief state b:

ACTIONS(b) = ⋃_{s ∈ b} ACTIONS_P(s)

If an illegal action might lead to catastrophe, it is safer to allow only the intersection, that is, the set of actions legal in all the states.
Ø Transition model: for deterministic actions, the new belief state has one result state for each of the current possible states (although some result states may be the same):

b′ = RESULT(b, a) = {s′ : s′ = RESULT_P(s, a) and s ∈ b}

With nondeterminism, the new belief state consists of all the possible results of applying the action to any of the states in the current belief state:

b′ = RESULT(b, a) = ⋃_{s ∈ b} RESULTS_P(s, a)

Ø Goal test: the agent possibly achieves the goal if any state s in the belief state satisfies the goal test of the underlying problem, IS-GOAL_P(s). The agent necessarily achieves the goal if every state satisfies IS-GOAL_P(s); we aim to necessarily achieve the goal.
Ø Action cost: this is also tricky. If the same action can have different costs in different states, then the cost of taking an action in a given belief state could be one of several values.
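A minimal sketch of these components in Python, using frozensets as belief states; physical_actions, physical_results, and is_goal_p stand in for the corresponding components of the underlying problem P:

```python
from functools import reduce

def belief_actions(b, physical_actions, safe=False):
    # Union when illegal actions are harmless; intersection when an
    # illegal action might be catastrophic.
    sets = [set(physical_actions(s)) for s in b]
    return reduce(set.intersection if safe else set.union, sets)

def belief_result(b, a, physical_results):
    # Nondeterministic case: the union of RESULTS_P(s, a) over s in b.
    # (For a deterministic RESULT_P, have physical_results return a
    # one-element set.)
    return frozenset().union(*(physical_results(s, a) for s in b))

def belief_is_goal(b, is_goal_p):
    # "Necessarily achieves the goal": every member state must satisfy it.
    return all(is_goal_p(s) for s in b)
```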
Ø The figure shows (a) the prediction of the next belief state for the sensorless vacuum world with the deterministic action Right, and (b) the prediction for the same belief state and action in the slippery version of the sensorless vacuum world.
Ø Another approach is to avoid the standard search algorithms, which treat belief states as black boxes just like any other problem state. Instead, we can look inside the belief states and develop incremental belief-state search algorithms that build up the solution one physical state at a time.
Ø For example, in the sensorless vacuum world, the initial belief state is {1, 2, 3, 4, 5, 6, 7, 8}, and we have to find an action sequence that works in all 8 states. We can do this by first finding a solution that works for state 1; then we check whether it works for state 2; if not, we go back and find a different solution for state 1, and so on.
Ø The main advantage of the incremental approach is that it is typically able to detect failure quickly—when a belief state is unsolvable, it is usually the case that a small subset of the belief state, consisting of the first few states examined, is also unsolvable.
Ø In some cases, this leads to a speedup proportional to the size of the belief states, which may themselves be as large as the physical state space itself.

Searching in partially observable environments
Ø For a partially observable problem, the problem specification will specify a PERCEPT(s) function that returns the percept received by the agent in a given state. If sensing is nondeterministic, then we can use a PERCEPTS function that returns a set of possible percepts.
Ø For fully observable problems, PERCEPT(s) = s for every state s, and for sensorless problems PERCEPT(s) = null.
Ø Consider a local-sensing vacuum world, in which the agent has a position sensor that yields the percept L in the left square and R in the right square, and a dirt sensor that yields Dirty when the current square is dirty and Clean when it is clean.
Ø Thus, the PERCEPT in state 1 is [L, Dirty]. With partial observability, it will usually be the case that several states produce the same percept; state 3 will also produce [L, Dirty]. Hence, given this initial percept, the initial belief state will be {1, 3}.
Ø The figure shows two examples of transitions in local-sensing vacuum worlds. (a) In the deterministic world, Right is applied in the initial belief state, resulting in a new predicted belief state with two possible physical states; for those states, the possible percepts are [R, Dirty] and [R, Clean], leading to two belief states, each of which is a singleton. (b) In the slippery world, Right is applied in the initial belief state, giving a new belief state with four physical states; for those states, the possible percepts are [L, Dirty], [R, Dirty], and [R, Clean], leading to three belief states, as shown.
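That predict-then-update cycle can be sketched as follows; physical_results and percept_fn are assumed stand-ins for RESULTS_P and PERCEPT:

```python
def predict(b, a, physical_results):
    # Apply the action to every state in the belief state.
    return frozenset().union(*(physical_results(s, a) for s in b))

def possible_percepts(b, percept_fn):
    # Percepts the agent might receive in the predicted belief state.
    return {percept_fn(s) for s in b}

def update(b, percept, percept_fn):
    # Keep only the states consistent with the observed percept.
    return frozenset(s for s in b if percept_fn(s) == percept)

def successor_beliefs(b, a, physical_results, percept_fn):
    # One successor belief state per possible percept, as in the figure.
    b_hat = predict(b, a, physical_results)
    return [update(b_hat, o, percept_fn)
            for o in possible_percepts(b_hat, percept_fn)]
```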
Solving partially observable problems
Ø The figure shows part of the search tree for the local-sensing vacuum world, assuming an initial percept [L, Dirty]. The solution is the conditional plan [Suck, Right, if Bstate = {6} then Suck else []].
Ø Notice that, because we supplied a belief-state problem to the AND–OR search algorithm, it returned a conditional plan that tests the belief state rather than the actual state. This is as it should be: in a partially observable environment the agent won't know the actual state.

Content
Ø Local Search and Optimization Problems
Ø Local Search in Continuous Spaces
Ø Search with Nondeterministic Actions
Ø Search in Partially Observable Environments
Ø Online Search Agents and Unknown Environments

Online Search Agents and Unknown Environments
Ø So far we have concentrated on agents that use offline search algorithms: they compute a complete solution before taking their first action.
Ø In contrast, an online search agent interleaves computation and action: first it takes an action, then it observes the environment and computes the next action.
Ø Online search is a good idea in dynamic or semi-dynamic environments, where there is a penalty for sitting around and computing too long.
Ø Online search is also helpful in nondeterministic domains because it allows the agent to focus its computational efforts on the contingencies that actually arise rather than those that might happen but probably won't.
Ø A canonical example of online search is the mapping problem: a robot is placed in an unknown building and must explore to build a map that can later be used for getting from A to B.
Ø Methods for escaping from labyrinths—required knowledge for aspiring heroes of antiquity—are also examples of online search algorithms.
Ø Consider a newborn baby: it has many possible actions but knows the outcomes of none of them, and it has experienced only a few of the possible states that it can reach.

Online search problems
Ø An online search problem is solved by interleaving computation, sensing, and acting. We'll start by assuming a deterministic and fully observable environment and stipulate that the agent knows only the following:
Ø ACTIONS(s), the legal actions in state s;
Ø c(s, a, s′), the cost of applying action a in state s to arrive at state s′ (note that this cannot be used until the agent knows that s′ is the outcome);
Ø IS-GOAL(s), the goal test.
Ø Note in particular that the agent cannot determine RESULT(s, a) except by actually being in s and doing a.
Ø For example, in the maze problem shown in the figure, the agent does not know that going Up from (1,1) leads to (1,2); nor, having done that, does it know that going Down will take it back to (1,1).
Ø This degree of ignorance can be reduced in some applications—for example, a robot explorer might know how its movement actions work and be ignorant only of the locations of obstacles.
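One way to model this is an environment object that answers queries about legal actions and goals but reveals an outcome only when the agent actually executes an action. The tiny 2x2 maze below is a made-up assumption:

```python
class OnlineMaze:
    """The agent may query actions/is_goal, but it learns RESULT(s, a)
    only by executing the action from its current state."""

    def __init__(self):
        self.state = (1, 1)
        # Hidden from the agent: a tiny 2x2 grid with no internal walls.
        self._moves = {"Up": (0, 1), "Down": (0, -1),
                       "Left": (-1, 0), "Right": (1, 0)}

    def actions(self, s):
        x, y = s
        return [a for a, (dx, dy) in self._moves.items()
                if 1 <= x + dx <= 2 and 1 <= y + dy <= 2]

    def is_goal(self, s):
        return s == (2, 2)

    def execute(self, action):
        # The only way for the agent to discover the transition model.
        dx, dy = self._moves[action]
        self.state = (self.state[0] + dx, self.state[1] + dy)
        return self.state

env = OnlineMaze()
print(env.execute("Up"))   # the agent now knows Up from (1,1) -> (1,2)
```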
Ø Finally, the agent might have access to an admissible heuristic function h(s) that estimates the distance from the current state to a goal state. For example, the agent might know the location of the goal and be able to use the Manhattan-distance heuristic.
Ø Typically, the agent's objective is to reach a goal state while minimizing cost. (Another possible objective is simply to explore the entire environment.)
Ø The cost is the total path cost that the agent incurs as it travels. It is common to compare this cost with the path cost the agent would incur if it knew the search space in advance—that is, the optimal path in the known environment. In the language of online algorithms, this comparison is called the competitive ratio.
Ø Online explorers are vulnerable to dead ends: states from which no goal state is reachable. If the agent doesn't know what each action does, it might execute the "jump into bottomless pit" action, and thus never reach the goal. In general, no algorithm can avoid dead ends in all state spaces.
Ø Consider the two dead-end state spaces in the figure. An online search algorithm that has visited states S and A cannot tell whether it is in the top state space or the bottom one; the two look identical based on what the agent has seen. Therefore, there is no way it could know how to choose the correct action in both state spaces. This is an example of an adversary argument.
Ø Dead ends are a real difficulty for robot exploration—staircases, ramps, cliffs, one-way streets, and even natural terrain all present states from which some actions are irreversible—there is no way to return to the previous state.
Ø The exploration algorithm we will present is only guaranteed to work in state spaces that are safely explorable—that is, some goal state is reachable from every reachable state.
Ø State spaces with only reversible actions, such as mazes and 8-puzzles, are clearly safely explorable (if they have any solution at all).

Online search agents
Ø An online algorithm can discover successors only for a state that it physically occupies. To avoid traveling all the way to a distant state to expand the next node, it seems better to expand nodes in a local order.
Ø Depth-first search has exactly this property, because the next node expanded is a child of the previous node expanded.
Ø The figure shows an online search agent that uses depth-first exploration. Such an agent can safely explore only in state spaces in which every action can be "undone" by some other action. A sketch of this agent appears below.
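A minimal sketch in the spirit of the ONLINE-DFS-AGENT scheme just referenced; the method and field names are assumptions, and backtracking relies on the agent having already learned an action that leads back to the predecessor:

```python
class OnlineDFSAgent:
    """Online depth-first exploration. Assumes a safely explorable
    space in which actions are reversible."""

    def __init__(self, actions_fn, is_goal):
        self.actions_fn = actions_fn      # s -> list of legal actions
        self.is_goal = is_goal
        self.result = {}                  # learned model: (s, a) -> s'
        self.untried = {}                 # s -> actions not yet tried
        self.unbacktracked = {}           # s -> predecessor states
        self.s = self.a = None            # previous state and action

    def __call__(self, s_prime):
        if self.is_goal(s_prime):
            return None                               # stop
        if s_prime not in self.untried:
            self.untried[s_prime] = list(self.actions_fn(s_prime))
        if self.s is not None:
            self.result[(self.s, self.a)] = s_prime
            self.unbacktracked.setdefault(s_prime, []).insert(0, self.s)
        if self.untried[s_prime]:
            self.a = self.untried[s_prime].pop()      # keep exploring
        elif self.unbacktracked.get(s_prime):
            # Everything tried here: head back toward the most recent
            # predecessor, using an action already known to reach it.
            target = self.unbacktracked[s_prime].pop(0)
            self.a = next((a for a in self.actions_fn(s_prime)
                           if self.result.get((s_prime, a)) == target),
                          None)
            if self.a is None:
                return None    # reverse action not yet learned; give up
        else:
            return None                               # nowhere left
        self.s = s_prime
        return self.a
```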
Adversarial Search and Games
Part – 1
Dr. Oybek Eraliev, Department of Computer Engineering, Inha University in Tashkent. Email: [email protected]

Content
Ø Game Theory
Ø Optimal Decisions in Games
Ø Heuristic Alpha-Beta Tree Search
Ø Monte Carlo Tree Search
Ø Stochastic Games
Ø Limitations of Game Search Algorithms

Game Theory
Ø There are at least three stances we can take towards multi-agent environments.
Ø The first stance, appropriate when there are a very large number of agents, is to consider them in the aggregate as an economy, allowing us to do things like predict that increasing demand will cause prices to rise, without having to predict the action of any individual agent.
Ø Sec
