INTELLIGENT AGENTS

Agents and Environments

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. We use the term percept to refer to the content an agent's sensors are perceiving. An agent's percept sequence is the complete history of everything the agent has ever perceived. In general, an agent's choice of action at any given instant can depend on its built-in knowledge and on the entire percept sequence observed to date, but not on anything it hasn't perceived. Mathematically speaking, we say that an agent's behavior is described by the agent function, which maps any given percept sequence to an action. Internally, the agent function for an artificial agent will be implemented by an agent program. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.

Simple example: the vacuum-cleaner world consists of a robotic vacuum-cleaning agent in a world of squares that can be either dirty or clean. The vacuum agent perceives which square it is in and whether there is dirt in the square. The agent starts in square A. The available actions are to move to the right, move to the left, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this simple agent function for the vacuum-cleaner world shows that the agent cleans the current square if it is dirty and otherwise moves to the other square. Note that the table is of unbounded size unless there is a restriction on the length of possible percept sequences.
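To make this concrete, here is a rough sketch of that agent function as a small Python program. It is not part of the original notes; the square labels A and B and the (location, status) percept encoding are assumptions based on the description above.

    # A minimal sketch of the vacuum-cleaner agent function described above.
    # Percept: (location, status), where location is "A" or "B" and
    # status is "Dirty" or "Clean". All names are illustrative.

    def vacuum_agent(percept):
        """If the current square is dirty, suck; otherwise move to the other square."""
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:  # location == "B"
            return "Left"

    # Partial tabulation of the same agent function:
    #   ("A", "Clean") -> Right    ("A", "Dirty") -> Suck
    #   ("B", "Clean") -> Left     ("B", "Dirty") -> Suck
    print(vacuum_agent(("A", "Dirty")))   # Suck
    print(vacuum_agent(("B", "Clean")))   # Left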
Rationality

What is rational at any given time depends on four things: the performance measure that defines the criterion of success; the agent's prior knowledge of the environment; the actions that the agent can perform; and the agent's percept sequence to date. Definition of a rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Omniscience, learning, and autonomy

An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality. Rationality is not the same as perfection: rationality maximizes expected performance, while perfection maximizes actual performance. Rationality requires an agent to gather information and to learn as much as possible from what it perceives. A rational agent should also be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.

Specifying the task environment

The task environment specification is called the PEAS (Performance, Environment, Actuators, Sensors) description. For example, a PEAS description can be given for the task environment of an automated taxi driver.

Properties of task environments

1. Fully observable vs. partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data.

2. Single-agent vs. multiagent: The distinction between single-agent and multiagent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment. Chess is a competitive multiagent environment. On the other hand, in the taxi-driving environment, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment.

3. Deterministic vs. nondeterministic: If the next state of the environment is completely determined by the current state and the action executed by the agent(s), then we say the environment is deterministic; otherwise, it is nondeterministic. A partially observable environment may appear nondeterministic to the agent even if its underlying dynamics are deterministic. A nondeterministic environment is called stochastic if it explicitly deals with probabilities attached to the possible outcomes.

4. Episodic vs. sequential: In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action; the next episode does not depend on the actions taken in previous episodes. In sequential environments the current decision could affect all future decisions. Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.

5. Static vs. dynamic: If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.

6. Discrete vs. continuous: The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

7. Known vs. unknown: In a known environment, the outcomes (or outcome probabilities, if the environment is nondeterministic) for all actions are given. If the environment is unknown, the agent will have to learn how it works in order to make good decisions.

The Structure of Agents

The job of AI is to design an agent program that implements the agent function (the mapping from percepts to actions). This program will run on some sort of computing device with physical sensors and actuators, which we call the agent architecture:

agent = architecture + program

Agent programs

The agent program takes the current percept as input from the sensors and returns an action to the actuators. Notice the difference between the agent program, which takes the current percept as input, and the agent function, which may depend on the entire percept history. The TABLE-DRIVEN-AGENT program is invoked for each new percept and returns an action each time. It retains the complete percept sequence in memory.
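For illustration, here is a minimal sketch of the table-driven idea, assuming the percept sequence is stored in a list and the table is a Python dictionary keyed by percept-sequence tuples. The entries shown reuse the hypothetical vacuum-world encoding above; none of this is from the notes.

    # Sketch of a table-driven agent: it appends each new percept to the
    # stored percept sequence and looks the whole sequence up in a table.
    # The table contents and names here are illustrative.

    percepts = []             # complete percept sequence retained in memory
    table = {                 # maps percept sequences (as tuples) to actions
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("B", "Dirty"),): "Suck",
        (("A", "Dirty"), ("A", "Clean")): "Right",
        # ... one entry for every possible percept sequence
    }

    def table_driven_agent(percept):
        """Invoked for each new percept; returns an action each time."""
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    print(table_driven_agent(("A", "Dirty")))   # Suck
    print(table_driven_agent(("A", "Clean")))   # Right

Even in this two-square world the table grows without bound as the percept sequence lengthens, which motivates the discussion that follows.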
The key challenge for AI is to find out how to write programs that, to the extent possible, produce rational behavior from a smallish program rather than from a vast table; the agent program for a simple reflex agent in the two-location vacuum environment, for instance, is tiny compared with the corresponding table. There are four basic kinds of agent programs: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents.

1. Simple reflex agents: These agents select actions on the basis of the current percept, ignoring the rest of the percept history. For example, the vacuum agent is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt. Behavior is specified by condition–action rules, written as, for example: if car-in-front-is-braking then initiate-braking. A simple reflex agent acts according to a rule whose condition matches the current state, as defined by the percept. The INTERPRET-INPUT function generates an abstracted description of the current state from the percept, and the RULE-MATCH function returns the first rule in the set of rules that matches the given state description.

2. Model-based reflex agents: The agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program in some form. First, we need some information about how the world changes over time, which can be divided roughly into two parts: the effects of the agent's actions and how the world evolves independently of the agent. This knowledge about "how the world works", whether implemented in simple Boolean circuits or in complete scientific theories, is called a transition model of the world. Second, we need some information about how the state of the world is reflected in the agent's percepts; this kind of knowledge is called a sensor model. Together, the transition model and the sensor model allow an agent to keep track of the state of the world, to the extent possible given the limitations of the agent's sensors. An agent that uses such models is called a model-based agent. A model-based reflex agent keeps track of the current state of the world using an internal model and then chooses an action in the same way as the reflex agent. The interesting part is the function UPDATE-STATE, which is responsible for creating the new internal state description (a skeletal sketch appears after this list).

3. Goal-based agents: A model-based, goal-based agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of its goals.

4. Utility-based agents: A model-based, utility-based agent uses a model of the world, along with a utility function that measures its preferences among states of the world. It then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome.
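For illustration, here is a skeletal Python sketch of the model-based reflex structure described in item 2 above, assuming rules are (condition, action) pairs and the transition and sensor models are plain functions. All names are hypothetical; this sketches the general idea rather than reproducing the notes' own pseudocode.

    # Skeletal sketch of a model-based reflex agent: UPDATE-STATE combines the
    # transition model, the sensor model, the last action, and the new percept,
    # then the first matching condition-action rule is applied.

    def update_state(state, last_action, percept, transition_model, sensor_model):
        """Form a new internal state description."""
        predicted = transition_model(state, last_action)   # how the world evolves
        return sensor_model(predicted, percept)            # reconcile with the percept

    def model_based_reflex_agent(rules, transition_model, sensor_model):
        state, last_action = None, None

        def program(percept):
            nonlocal state, last_action
            state = update_state(state, last_action, percept,
                                 transition_model, sensor_model)
            # Rule matching: first rule whose condition holds in the current state.
            for condition, action in rules:
                if condition(state):
                    last_action = action
                    return action
            last_action = "NoOp"
            return last_action

        return program

    # Example use with deliberately trivial models: the state is just the latest percept.
    agent = model_based_reflex_agent(
        rules=[(lambda s: s == ("A", "Dirty"), "Suck"),
               (lambda s: s == ("A", "Clean"), "Right")],
        transition_model=lambda state, action: state,
        sensor_model=lambda predicted, percept: percept,
    )
    print(agent(("A", "Dirty")))   # Suck

In a less trivial agent, the transition and sensor models would encode real knowledge of the environment, so the internal state could track aspects of the world that the current percept alone does not reveal.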
Learning agents

Any type of agent (model-based, goal-based, utility-based, etc.) can be built as a learning agent (or not). Learning has another advantage, as noted earlier: it allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow. The performance element is what we have previously considered to be the whole agent program: it takes in percepts and decides on external actions. The learning element gets to modify that program to improve its performance; it uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future. The design of the learning element depends very much on the design of the performance element. The critic tells the learning element how well the agent is doing with respect to a fixed performance standard; the critic is necessary because the percepts themselves provide no indication of the agent's success. The problem generator is responsible for suggesting actions that will lead to new and informative experiences. Learning in intelligent agents can be summarized as a process of modifying each component of the agent to bring the components into closer agreement with the available feedback information, thereby improving the overall performance of the agent.

How the components of agent programs work

In an atomic representation each state of the world is indivisible; it has no internal structure. A factored representation splits up each state into a fixed set of variables or attributes, each of which can have a value. A structured representation describes objects and their relationships explicitly; structured representations underlie relational databases, and much of what humans express in natural language concerns objects and their relationships.

Summary

An agent is something that perceives and acts in an environment. The agent function for an agent specifies the action taken by the agent in response to any percept sequence. The performance measure evaluates the behavior of the agent in an environment. A rational agent acts so as to maximize the expected value of the performance measure, given the percept sequence it has seen so far. A task environment specification includes the performance measure, the external environment, the actuators, and the sensors. Task environments can be fully or partially observable, single-agent or multiagent, deterministic or nondeterministic, episodic or sequential, static or dynamic, discrete or continuous, and known or unknown. The agent program implements the agent function. Simple reflex agents respond directly to percepts, whereas model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept. Goal-based agents act to achieve their goals, and utility-based agents try to maximize their own expected "happiness." All agents can improve their performance through learning.