Intelligent Agents Lecture Notes PDF

Summary

These lecture notes provide an overview of intelligent agents, covering motivation, objectives, agent types, agents and environments, rationality, agent structure, and important concepts and terms. The document also includes examples of agents and their environments.

Full Transcript


Chapter Overview

- Motivation
- Objectives
- Introduction
- Agents and Environments
- Rationality
- Agent Structure
- Agent Types
  - Simple reflex agent
  - Model-based reflex agent
  - Goal-based agent
  - Utility-based agent
  - Learning agent
- Important Concepts and Terms
- Chapter Summary

(figure: Newell and Simon's model of human information processing)

Motivation

- agents provide a consistent viewpoint on various topics in the field of AI
- agents require essential skills to perform tasks that require intelligence
- intelligent agents use methods and techniques from the field of AI

Objectives

- introduce the essential concepts of intelligent agents
- define some basic requirements for the behavior and structure of agents
- establish mechanisms for agents to interact with their environment

What is an Agent?

- in general, an entity that interacts with its environment
  - perception through sensors
  - actions through effectors or actuators

Examples of Agents

- human agent
  - eyes, ears, skin, taste buds, etc. for sensors
  - hands, fingers, legs, mouth, etc. for actuators
  - powered by muscles
- robot
  - camera, infrared, bumper, etc. for sensors
  - wheels, lights, speakers, etc. for actuators
  - often powered by motors
- software agent
  - functions as sensors: information provided as input to functions in the form of encoded bit strings or symbols
  - functions as actuators: results deliver the output

Agents and Environments

- an agent perceives its environment through sensors
  - the complete set of inputs at a given time is called a percept
  - the current percept, or a sequence of percepts, may influence the actions of an agent
- an agent can change the environment through actuators
  - an operation involving an actuator is called an action
  - actions can be grouped into action sequences

Agents and Their Actions

- a rational agent does "the right thing"
  - the action that leads to the best outcome under the given circumstances
- an agent function maps percept sequences to actions
  - abstract mathematical description
- an agent program is a concrete implementation of the respective function
  - it runs on a specific agent architecture ("platform")
- problems:
  - what is "the right thing"?
  - how do you measure the "best outcome"?

Performance of Agents

- criteria for measuring the outcome and the expenses of the agent
- often subjective, but should be objective
- task dependent
- time may be important

Performance Evaluation Examples

- vacuum agent
  - number of tiles cleaned during a certain period
    - based on the agent's report, or validated by an objective authority
  - doesn't consider expenses of the agent or side effects
    - energy, noise, loss of useful objects, damaged furniture, scratched floor
  - might lead to unwanted activities
    - agent re-cleans clean tiles, covers only part of the room, drops dirt on tiles to have more tiles to clean, etc.
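The agent function and the vacuum-agent performance measure above can be sketched in Python. This is a minimal illustration, not part of the original notes: the two-tile world, the percept format (location, status), and all names are assumptions.

```python
# Sketch of an agent function for a hypothetical two-tile vacuum world.
# A percept is assumed to be a (location, status) pair.

def vacuum_agent_function(percept_sequence):
    """Map a percept sequence to an action (only the latest percept matters here)."""
    location, status = percept_sequence[-1]
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

def performance_measure(history):
    """Reward clean tiles over time; charge one point per movement (an expense)."""
    score = 0
    for state, action in history:
        score += sum(1 for tile in state.values() if tile == "clean")
        if action in ("left", "right"):
            score -= 1
    return score

assert vacuum_agent_function([("A", "dirty")]) == "suck"
assert performance_measure([({"A": "clean", "B": "dirty"}, "right")]) == 0
```

Charging for movement is one way to address the slide's point that counting cleaned tiles alone ignores the agent's expenses.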
Rational Agent

- selects the action that is expected to maximize its performance
  - based on a performance measure
  - depends on the percept sequence, background knowledge, and feasible actions

Rational Agent Considerations

- performance measure for the successful completion of a task
- complete perceptual history (percept sequence)
- background knowledge
  - especially about the environment: dimensions, structure, basic "laws"
  - task, user, other agents
- feasible actions
  - capabilities of the agent

Omniscience

- a rational agent is not omniscient (all-knowing)
  - it doesn't know the actual outcome of its actions
  - it may not know certain aspects of its environment
- rationality takes into account the limitations of the agent
  - percept sequence, background knowledge, feasible actions
  - it deals with the expected outcome of actions

Environments

- determine to a large degree the interaction between the "outside world" and the agent
  - the "outside world" is not necessarily the "real world" as we perceive it
- in many cases, environments are implemented within computers
  - they may or may not have a close correspondence to the "real world"

Environment Properties

- fully observable vs. partially observable: sensors capture all relevant information from the environment
- deterministic vs. stochastic (non-deterministic): changes in the environment are predictable
- episodic vs. sequential (non-episodic): independent perceiving-acting episodes
- static vs. dynamic: no changes while the agent is "thinking"
- discrete vs. continuous: limited number of distinct percepts/actions
- single vs. multiple agents: interaction and collaboration among agents; competitive, cooperative

Environment Programs

- environment simulators for experiments with agents
  - give a percept to an agent
  - receive an action
  - update the environment
- often divided into environment classes for related tasks or types of agents
- frequently provide mechanisms for measuring the performance of agents

From Percepts to Actions

- if an agent only reacts to its percepts, a table can describe the mapping from percept sequences to actions
- instead of a table, a simple function may also be used
  - can be conveniently used to describe simple agents that solve well-defined problems in a well-defined environment
  - e.g. calculation of mathematical functions

PEAS Description of Task Environments

used for high-level characterization of agents

- Performance Measures: used to evaluate how well an agent solves the task at hand
- Environment: surroundings beyond the control of the agent
- Actuators: determine the actions the agent can perform
- Sensors: provide information about the current state of the environment

Exercise: Vacuum-Cleaner PEAS Description

- use the PEAS template to determine important aspects for a vacuum-cleaner agent

PEAS Description Template

used for high-level characterization of agents

- Performance Measures: How well does the agent solve the task at hand? How is this measured?
- Environment: Important aspects of the surroundings beyond the control of the agent.
- Actuators: Determine the actions the agent can perform.
- Sensors: Provide information about the current state of the environment.
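As one possible worked instance of the PEAS exercise, the vacuum-cleaner description can be written down as a small data structure. The field contents below are one plausible answer, not an official solution from the notes.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """High-level PEAS characterization of a task environment."""
    performance: list   # how success is measured
    environment: list   # surroundings beyond the agent's control
    actuators: list     # actions the agent can perform
    sensors: list       # information about the current state

# Hypothetical PEAS description for a vacuum-cleaner agent.
vacuum_peas = PEAS(
    performance=["amount of dirt cleaned", "time taken", "energy used"],
    environment=["room layout", "dirt distribution", "obstacles", "furniture"],
    actuators=["wheels", "suction unit", "brush"],
    sensors=["dirt sensor", "bump sensor", "position sensor"],
)
```

Writing the description as data keeps the four PEAS slots explicit and makes it easy to compare agents for different tasks.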
Agent Programs

- the emphasis in this course is on programs that specify the agent's behavior through mappings from percepts to actions
  - agents receive one percept at a time
  - they may or may not keep track of the percept sequence
- performance evaluation is often done by an outside authority, not the agent
  - more objective, less complicated
  - can be integrated with the environment program

Skeleton Agent Program

- basic framework for an agent program

    function SKELETON-AGENT(percept) returns action
      static: memory
      memory := UPDATE-MEMORY(memory, percept)
      action := CHOOSE-BEST-ACTION(memory)
      memory := UPDATE-MEMORY(memory, action)
      return action

Look It Up!

- a table is a simple way to specify a mapping from percepts to actions
- tables may become very large
- all work is done by the designer
- no autonomy; all actions are predetermined
- learning might take a very long time

Table Agent Program

- agent program based on table lookup

    function TABLE-DRIVEN-AGENT(percept) returns action
      static: percepts  // initially empty sequence*
              table     // indexed by percept sequences, initially fully specified
      append percept to the end of percepts
      action := LOOKUP(percepts, table)
      return action

    * Note: the storage of percepts requires writeable memory

Agent Program Types

- different ways of achieving the mapping from percepts to actions
- different levels of complexity
  - simple reflex agents
  - agents that keep track of the world (model-based)
  - goal-based agents
  - utility-based agents
  - learning agents

Simple Reflex Agent

- instead of specifying individual mappings in an explicit table, common input-output associations are recorded
  - requires processing of percepts to achieve some abstraction
- frequent method of specification is through condition-action rules
  - if percept then action
- similar to innate reflexes or learned responses in humans
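The TABLE-DRIVEN-AGENT pseudocode can be sketched concretely in Python. The two-tile world and the tiny table below are hypothetical; note that table keys are entire percept sequences, which is why such tables explode combinatorially.

```python
# Sketch of TABLE-DRIVEN-AGENT for a hypothetical two-tile vacuum world.
# The table is indexed by full percept sequences (tuples of percepts).

def make_table_driven_agent(table):
    percepts = []   # writeable memory for the growing percept sequence

    def agent(percept):
        percepts.append(percept)                      # append percept to percepts
        return table.get(tuple(percepts), "no-op")    # LOOKUP(percepts, table)

    return agent

# Hypothetical table: only sequences of length one are specified here, so the
# designer has already failed to cover longer histories.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("B", "dirty"),): "suck",
    (("B", "clean"),): "left",
}

agent = make_table_driven_agent(table)
assert agent(("A", "dirty")) == "suck"
# The second call looks up a length-2 sequence, which this table does not cover:
assert agent(("A", "clean")) == "no-op"
```

With p possible percepts, covering all sequences up to length t requires on the order of p^t entries, illustrating the slide's point that the table approach leaves all the work to the designer and does not scale.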
- efficient implementation, but limited power
  - environment must be fully observable
  - easily runs into infinite loops

Reflex Agent Diagram

(diagram: Sensors report "what the world is like now"; condition-action rules determine "what should I do now"; the chosen action goes to the Actuators, which act on the Environment)

Reflex Agent Program

- application of simple rules to situations

    function SIMPLE-REFLEX-AGENT(percept) returns action
      static: rules  // set of condition-action rules
      condition := INTERPRET-INPUT(percept)
      rule := RULE-MATCH(condition, rules)
      action := RULE-ACTION(rule)
      return action

Exercise: VacBot Reflex Agent

- specify a core set of condition-action rules for a VacBot agent

Model-Based Reflex Agent

- an internal state maintains important information from previous percepts
  - sensors only provide a partial picture of the environment
  - helps with some partially observable environments
- the internal state reflects the agent's knowledge about the world
  - this knowledge is called a model
  - may contain information about changes in the world
    - caused by actions of the agent
    - independent of the agent's behavior

Model-Based Reflex Agent Diagram

(diagram: as in the reflex agent, but an internal State, together with knowledge of "how the world evolves" and "what my actions do", feeds the condition-action rules)

Model-Based Reflex Agent Program

- application of simple rules to situations

    function REFLEX-AGENT-WITH-STATE(percept) returns action
      static: rules   // set of condition-action rules
              state   // description of the current world state
              action  // most recent action, initially none
      state := UPDATE-STATE(state, action, percept)
      rule := RULE-MATCH(state, rules)
      action := RULE-ACTION(rule)
      return action

Goal-Based Agent

- the agent tries to reach a desirable state, the goal
  - may be provided from the outside (user, designer, environment), or inherent to the agent itself
- results of possible actions are considered with respect to the goal
  - easy when the results can be related to the goal after each action
  - in general, it can be difficult to attribute goal satisfaction to individual actions
- may require consideration of the future
  - what-if scenarios
  - search, reasoning, or planning
- very flexible, but not very efficient

Goal-Based Agent Diagram

(diagram: as in the model-based agent, plus "what happens if I do an action" and a Goals component feeding the choice of action)

Utility-Based Agent

- more sophisticated distinction between different world states
- a utility function maps states onto a real number
  - may be interpreted as "degree of happiness"
- permits rational actions for more complex tasks
  - resolution of conflicts between goals (tradeoffs)
  - multiple goals (likelihood of success, importance)
- a utility function is necessary for rational behavior, but sometimes it is not made explicit

Utility-Based Agent Diagram

(diagram: as in the goal-based agent, with a Utility component, "how happy will I be then", in place of Goals)

Learning Agent

- performance element
  - selects actions based on percepts, internal state, background knowledge
  - can be one of the previously described agents
- learning element
  - identifies improvements
- critic
  - provides feedback about the performance of the agent
  - can be external; sometimes part of the environment
- problem generator
  - suggests actions
  - required for novel solutions (creativity)

Learning Agent Diagram

(diagram: the Critic compares sensor input against a performance standard and informs the Learning Element, which modifies the Performance Element; the Problem Generator suggests exploratory actions)

Important Concepts and Terms

action, actuator, agent, agent program, architecture, autonomous agent, continuous environment, deterministic environment, discrete environment, episodic environment, goal, intelligent agent, knowledge representation, mapping, multi-agent environment, observable environment, omniscient agent, PEAS description, percept, percept sequence, performance measure, rational agent, reflex agent, robot, sensor, sequential environment, software agent, state, static environment, stochastic environment, utility

Chapter Summary

- agents perceive and act in an environment
- ideal agents maximize their performance measure
  - autonomous agents act independently
- basic agent types
  - simple reflex
  - reflex with state (model-based)
  - goal-based
  - utility-based
  - learning
- some environments may make life harder for agents
  - inaccessible, non-deterministic, non-episodic, dynamic, continuous
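As a closing worked example, the REFLEX-AGENT-WITH-STATE pseudocode from the model-based reflex agent section can be sketched in Python. The two-tile vacuum world, the state-update logic, and the rules are illustrative assumptions, not from the notes.

```python
# Sketch of REFLEX-AGENT-WITH-STATE: the agent maintains a model of which
# tiles it believes are clean, so it can stop once the model says all are clean.

def make_model_based_vacuum_agent():
    state = {"A": "unknown", "B": "unknown"}   # the agent's model of the world
    last_action = None                         # most recent action, initially none

    def agent(percept):
        nonlocal last_action
        location, status = percept
        # UPDATE-STATE: fold the new percept and the last action into the model.
        state[location] = status
        if last_action == "suck":
            state[location] = "clean"          # "what my actions do"
        # RULE-MATCH / RULE-ACTION: condition-action rules over the model state.
        if status == "dirty":
            action = "suck"
        elif all(tile == "clean" for tile in state.values()):
            action = "no-op"                   # model says everything is clean
        else:
            action = "right" if location == "A" else "left"
        last_action = action
        return action

    return agent

agent = make_model_based_vacuum_agent()
assert agent(("A", "dirty")) == "suck"
assert agent(("A", "clean")) == "right"   # B is still unknown in the model
assert agent(("B", "clean")) == "no-op"   # model now says both tiles are clean
```

Unlike the simple reflex agent, this agent's behavior depends on its internal state, so it can act sensibly even though each individual percept only shows one tile of the partially observable world.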
