Artificial Intelligence Lecture Notes: Mansoura University
Document Details
Mansoura University
Amir El-Ghamry
Summary
These lecture notes from Mansoura University cover the fundamentals of intelligent agents in artificial intelligence: agent types, agent-environment interaction, and the nature of task environments, with examples and diagrams to illustrate key concepts.
Full Transcript
Faculty of Computer and Information Sciences, Mansoura University, Information System Department
Intelligent Information System: Artificial Intelligence
Lecture 2 and 3: Intelligent Agents
Lecture notes by Dr. Reham Reda Mostafa; prepared by Amir El-Ghamry ([email protected])

Outline
— Agents
— Agents and environments
— Good behavior: the concept of rationality
— The nature of environments
— The structure of agents
— Types of agent programs

The Nature of Environments

To design a rational agent, we must specify the task environment. The PEAS description covers:
— Performance measure: how is the agent assessed?
— Environment: what exists around the agent?
— Actuators: how does the agent change the environment?
— Sensors: how does the agent sense the environment?

In designing an agent, the first step must always be to specify the task environment (PEAS) as fully as possible.

More PEAS examples:

| Agent Type | Performance Measure | Environment | Actuators | Sensors |
|---|---|---|---|---|
| Satellite image system | Correct image categorization | Downlink from satellite | Display categorization of scene | Color pixel array |
| Part-picking robot | Percentage of parts in correct bins | Conveyor belt with parts; bins | Jointed arm and hand | Camera, joint angle sensors |
| Interactive English tutor | Maximize student's score on test | Set of students, testing agency | Display exercises, suggestions, corrections | Keyboard entry |
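As a concrete illustration, a PEAS description can be written down as a simple record. The sketch below is our own minimal example (the class and field names are not from the slides); it encodes the part-picking robot row of the table above.

```python
# A minimal sketch of a PEAS task-environment specification as a plain
# record. The class and field names are our own illustration, not part
# of the lecture slides.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: str  # how is the agent assessed?
    environment: str          # what exists around the agent?
    actuators: str            # how does the agent change the environment?
    sensors: str              # how does the agent sense the environment?

# The part-picking robot row from the table above:
part_picking_robot = PEAS(
    performance_measure="Percentage of parts in correct bins",
    environment="Conveyor belt with parts; bins",
    actuators="Jointed arm and hand",
    sensors="Camera, joint angle sensors",
)
```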
Environment Types

Task environments vary along several dimensions:
— Fully observable vs. partially observable
— Deterministic vs. stochastic
— Episodic vs. sequential
— Static vs. dynamic
— Discrete vs. continuous
— Single agent vs. multiagent

Fully observable vs. partially observable:
— An environment is fully observable if the agent's sensors give it access to the complete state of the environment at each point in time.
— Fully observable environments are convenient, because the agent need not maintain any internal state to keep track of the world.
— An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
— Examples: a vacuum cleaner with only a local dirt sensor, taxi driving.

Deterministic vs. stochastic:
— The environment is deterministic if its next state is completely determined by the current state and the action executed by the agent.
— In principle, an agent need not worry about uncertainty in a fully observable, deterministic environment.
— If the environment is partially observable, then it may appear to be stochastic.
— Examples: chess is deterministic, while taxi driving is not.
— If the environment is deterministic except for the actions of other agents, the environment is strategic.

Episodic vs. sequential:
— In episodic environments, the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. Example: a mail-sorting robot.
— In sequential environments, the current decision could affect all future decisions. Examples: chess and taxi driving.

Static vs. dynamic:
— The environment is static if it is unchanged while the agent is deliberating.
— Static environments are easy to deal with, because the agent need not keep looking at the world while deciding on an action, nor need it worry about the passage of time.
— Dynamic environments, in effect, continuously ask the agent what it wants to do.
— The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.
— Examples: taxi driving is dynamic, chess played with a clock is semi-dynamic, crossword puzzles are static.
(The original slide illustrates static, dynamic, and semi-dynamic environments with pictures, omitted here.)

Discrete vs. continuous:
— A discrete environment has a limited number of distinct, clearly defined states, percepts, and actions.
— Examples: chess has a finite number of discrete states and a discrete set of percepts and actions; taxi driving has continuous states and actions (speed and location are continuous values).

Single agent vs. multiagent:
— An agent operating by itself in an environment is a single agent.
— Examples: a crossword puzzle is single-agent, while chess is two-agent.
— Chess is a competitive multiagent environment, while taxi driving is a partially cooperative multiagent environment.

The environment type largely determines the agent design. The simplest environment is fully observable, deterministic, episodic, static, discrete, and single-agent. The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

| | Solitaire | Chess with a clock | Taxi | Vacuum cleaner |
|---|---|---|---|---|
| Observable? | No | Yes | No | Yes |
| Deterministic? | Yes | Yes | No | Yes |
| Episodic? | No | No | No | Yes |
| Static? | Yes | Semi | No | Yes |
| Discrete? | Yes | Yes | No | Yes |
| Single-agent? | Yes | No | No | Yes |
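To make the summary concrete, the six dimensions can be encoded as booleans and compared programmatically. The following sketch is our own illustration, not part of the slides; it reproduces the taxi and vacuum-cleaner columns of the table above.

```python
# A sketch encoding the six environment dimensions as booleans so that
# task environments can be compared programmatically. All names here
# are our own illustration.
from dataclasses import dataclass

@dataclass
class EnvProperties:
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

    def is_simplest(self) -> bool:
        # The "easiest" combination described above: fully observable,
        # deterministic, episodic, static, discrete, single-agent.
        return all([self.fully_observable, self.deterministic, self.episodic,
                    self.static, self.discrete, self.single_agent])

taxi = EnvProperties(False, False, False, False, False, False)
vacuum = EnvProperties(True, True, True, True, True, True)

print(taxi.is_simplest())    # False: taxi driving is hard on every axis
print(vacuum.is_simplest())  # True, per the idealized vacuum world in the table
```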
The Structure of Agents

Agent = agent program + architecture

An agent is completely specified by the agent program, which implements the agent function mapping percept sequences to actions. The architecture is some sort of computing device with physical sensors and actuators (a PC, a robotic car); it should be appropriate to the program: a Walk action requires legs.

Obviously, the program we choose has to be one that is appropriate for the architecture. If the program is going to recommend actions like Walk, the architecture had better have legs. The architecture might be just an ordinary PC, or it might be a robotic car with several onboard computers, cameras, and other sensors. In general, the architecture:
— makes the percepts from the sensors available to the program,
— runs the program,
— and feeds the program's action choices to the actuators as they are generated.

Agent programs

All agent programs share the same skeleton: they take the current percept as input from the sensors and return an action to the actuators.

Types of Agents

There are four basic kinds of agent program that embody the principles underlying almost all intelligent systems:
— simple reflex agents;
— model-based reflex agents;
— goal-based agents; and
— utility-based agents.

We then explain in general terms how to convert all of these into learning agents.

Simple reflex agents

The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history. For example, the vacuum agent is a simple reflex agent, because its decision is based only on its current location and on whether that location contains dirt.

A simple reflex agent makes the connection from percepts to actions through condition-action rules, written as:

if car-in-front-is-braking then initiate-braking

A schematic figure (omitted here) shows how the condition-action rules allow the agent to make the connection from percept to action. The agent program itself, which is also very simple, is shown in figure 2.9 of the textbook.
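A minimal Python rendering of that program might look as follows. This is our own sketch in the spirit of figure 2.9, not the textbook's code; the percept and rule representations and the default rule are illustrative assumptions. The two helper functions correspond to the INTERPRET-INPUT and RULE-MATCH steps described next.

```python
# A sketch of the simple reflex agent program. The percept is assumed,
# for illustration, to already be a simple dict.

def interpret_input(percept):
    # Generate an abstracted state description from the raw percept.
    return percept

RULES = [
    # Condition-action rules as (condition, action) pairs.
    (lambda state: state.get("car_in_front_is_braking"), "initiate-braking"),
    (lambda state: True, "keep-driving"),  # illustrative default rule
]

def rule_match(state, rules):
    # Return the first rule whose condition matches the state description.
    for rule in rules:
        condition, _action = rule
        if condition(state):
            return rule

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    _condition, action = rule_match(state, RULES)
    return action

print(simple_reflex_agent({"car_in_front_is_braking": True}))   # initiate-braking
print(simple_reflex_agent({"car_in_front_is_braking": False}))  # keep-driving
```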
The INTERPRET-INPUT function generates an abstracted description of the current state from the percept, and the RULE-MATCH function returns the first rule in the set of rules that matches the given state description.

Simple reflex agents have the admirable property of being simple, but they turn out to be of very limited intelligence. The agent will work only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable.

Model-based reflex agents

The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.

Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program:
— First, we need some information about how the world evolves independently of the agent.
— Second, we need some information about how the agent's own actions affect the world.

This knowledge about "how the world works" is called a model of the world. An agent that uses such a model is called a model-based agent.

A figure (omitted here) shows the structure of the reflex agent with internal state, and how the current percept is combined with the old internal state to generate the updated description of the current state.

The interesting part of the agent program is the function UPDATE-STATE, which is responsible for creating the new internal state description. As well as interpreting the new percept in the light of existing knowledge about the state, it uses information about how the world evolves to keep track of the unseen parts of the world, and it must also know what the agent's actions do to the state of the world.
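A hedged sketch of the corresponding agent loop, again our own rendering rather than the textbook's code: the `model` callback stands in for the two kinds of knowledge just described (how the world evolves, and what the agent's actions do).

```python
# A sketch of the model-based reflex agent loop. The state
# representation and the `model` callback are our own illustrative
# assumptions; a real agent would plug in its actual world knowledge.

class ModelBasedReflexAgent:
    def __init__(self, rules, model):
        self.state = {}          # internal description of the world
        self.last_action = None  # most recent action, needed by the model
        self.rules = rules       # condition-action rules, as before
        self.model = model       # "how the world works"

    def update_state(self, percept):
        # 1. Project the old state forward: the model uses knowledge of
        #    how the world evolves and what the agent's actions do.
        self.state = self.model(self.state, self.last_action)
        # 2. Fold in what the new percept directly reveals.
        self.state.update(percept)

    def __call__(self, percept):
        self.update_state(percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
```

Note that, unlike the simple reflex agent, this agent carries `state` and `last_action` across calls; that persistence is exactly what lets it cope with partial observability.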
Goal-based agents

Knowing about the current state of the environment is not always enough to decide what to do (e.g., the decision at a road junction). The agent needs some sort of goal information that describes situations that are desirable (e.g., being at the passenger's destination). The agent program can combine this with information about the results of possible actions in order to choose actions that achieve the goal.

Notice that decision making of this kind is fundamentally different from the condition-action rules of a reflex agent, in that it involves consideration of the future, both "What will happen if I do such-and-such?" and "Will that make me happy?" A figure (omitted here) shows the goal-based agent's structure.

Goal-based agents vs. reflex agents:
— Although the goal-based agent appears less efficient, it is more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified; its behavior can easily be changed.
— For the reflex agent, by contrast, we would have to rewrite many condition-action rules; its rules must be changed for each new situation.

Utility-based agents

Goals alone are not really enough to generate high-quality behavior in most environments; they provide only a binary distinction between "happy" and "unhappy" states. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent if they could be achieved. "Happy" here means utility (the quality of being useful): a utility function maps a state onto a real number that describes the associated degree of happiness. The utility-based agent's structure appears in a figure (omitted here).

Learning agents

Learning has another advantage: it allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow. Learning in intelligent agents can be summarized as a process of modifying each component of the agent to bring the components into closer agreement with the available feedback information, thereby improving the overall performance of the agent.

A learning agent can be divided into four conceptual components:
— The learning element is responsible for making improvements.
— The performance element is responsible for selecting external actions (it is what we had previously defined as the entire agent).
— The critic provides feedback on how the agent is doing; the learning element uses this feedback to determine how the performance element should be modified to do better in the future.
— The problem generator is responsible for suggesting actions that will lead to new and informative experiences.

Readings
— Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig (3rd Edition, 2009), Chapter 2.

Thank You