Introduction to Artificial Intelligence Lecture 2 PDF

Summary

These are lecture notes on Artificial Intelligence and intelligent agents. Topics covered include agents, environments, rationality, agent programs, and task environments. Examples of rational choice are given.

Full Transcript

Introduction to Artificial Intelligence
Intelligent Agents (Lecture 02)
Dr. Samia, Prof. Dr. Mohammed Elmogy

Outline

► Agents and Environments
► Good Behavior: The Concept of Rationality
► The Nature of Environments
► The Structure of Agents
  ▪ Simple Reflex Agents
  ▪ Model-based Reflex Agents
  ▪ Goal-based Agents
  ▪ Utility-based Agents
  ▪ Learning Agents
  ▪ How Do the Components of Agent Programs Work?

Agents and Environments

Agents

► An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
► Human agent: eyes, ears, and other organs as sensors; hands, legs, mouth, and other body parts as actuators.
► Robotic agent: cameras and infrared range finders as sensors; various motors as actuators.
► Software agent: keystrokes, file contents, and network packets as sensory inputs; it acts on the environment by displaying on the screen, writing files, and sending network packets.

Agents (cont.)

[Figure: an agent interacting with its environment through sensors and actuators]

► An agent's behavior is described by the agent function, which maps any given percept sequence to an action.
► The agent function maps from percept histories to actions: f : P* → A.
► For an artificial agent, the agent function is implemented by an agent program.
► The agent program runs on the physical architecture to produce f:
  agent = architecture + program

Vacuum-Cleaner World

► Percepts: location and contents, e.g., [A, Dirty].
► Actions: Left, Right, Suck, NoOp.

Vacuum-Cleaner World (cont.)

[Figure/table from this slide not preserved in the transcript]
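The vacuum-cleaner world above fits in a one-screen agent program. A minimal sketch in Python (the function name and string constants are illustrative choices, not taken from the slides):

```python
def reflex_vacuum_agent(percept):
    """Map a single percept [location, status] to an action.

    A simple reflex rule for the two-square vacuum world:
    suck if the current square is dirty, otherwise move to
    the other square.
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"      # clean the current square first
    elif location == "A":
        return "Right"     # square A is clean, head to B
    else:
        return "Left"      # square B is clean, head to A
```

For example, `reflex_vacuum_agent(("A", "Dirty"))` returns `"Suck"`. Note this implements the agent *program* (it sees only the current percept); the agent *function* is the induced mapping over whole percept sequences.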
Good Behavior: The Concept of Rationality

Rational Agents

► An agent should strive to do the right thing, based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
► Performance measure: an objective criterion for success of an agent's behavior.
► E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the time taken, the electricity consumed, the noise generated, etc.

Rational Agents (cont.)

What is rational at any given time depends on:
► The performance measure that defines the criterion of success.
► The agent's prior knowledge of the environment.
► The actions that the agent can perform.
► The agent's percept sequence to date.

Rational Agents (cont.)

► Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Features of Rational Agents

► Rational ≠ omniscient (all-knowing with infinite knowledge): percepts may not supply all relevant information.
► Rational ≠ clairvoyant: action outcomes may not be as expected.
► Hence, rational ≠ successful.

Features of Rational Agents (cont.)

► Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).
► An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).
► Rational ⇒ exploration, learning, autonomy.

The Nature of Environments

PEAS

► To design a rational agent, we must specify the task environment: task environments are essentially the problems to which rational agents are the solutions.
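One way to make a task-environment specification concrete is to write the PEAS description down as data. A hypothetical sketch; the field names mirror the acronym, and the automated-taxi values follow the standard AIMA description rather than being copied from the slides:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment specification: Performance measure,
    Environment, Actuators, Sensors."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# Assumed values for the automated-taxi example (standard AIMA description).
automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip",
                         "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
```

Writing PEAS as a record like this makes it easy to compare agent types side by side, as the examples below do.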
► We must first specify the setting for intelligent agent design.
► PEAS: Performance measure, Environment, Actuators, Sensors.

[Table: PEAS description of the task environment for an automated taxi]

PEAS for Pacman

► Performance measure: -1 per step; +10 per food pellet; +500 for winning; -500 for dying; +200 for hitting a scared ghost.
► Environment: Pacman dynamics (including ghost behavior).
► Actuators: Left, Right, Up, Down.
► Sensors: the entire state is visible (except power-pellet duration).

PEAS for a Medical Diagnosis System

► Performance measure: healthy patient, minimized costs, avoided lawsuits.
► Environment: patient, hospital, staff.
► Actuators: screen display (questions, tests, diagnoses, treatments, referrals).
► Sensors: keyboard (entry of symptoms, findings, patient's answers).

[Table: examples of agent types and their PEAS descriptions]

Environment Types

► Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
► Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, the environment is strategic.)
► Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
► Static (vs. dynamic): the environment is unchanged while the agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
► Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.
► Single-agent (vs. multi-agent): an agent operating by itself in an environment.

Environment Types (cont.)

Fully observable (vs. partially observable):
▪ Is everything the agent requires to choose its actions available to it via its sensors? If so, the environment is fully observable (perfect or full information).
▪ If not, parts of the environment are inaccessible, and the agent must make informed guesses about the world.
▪ In decision theory: perfect information vs. imperfect information.

Deterministic (vs. stochastic):
▪ Does the change in world state depend only on the current state and the agent's action?
▪ Non-deterministic environments have aspects beyond the control of the agent, so utility functions have to guess at changes in the world.

Episodic (vs. sequential):
▪ Is the choice of the current action independent of previous actions? If so, the environment is episodic.
▪ In non-episodic (sequential) environments, the agent has to plan ahead: the current choice will affect future actions.

Static (vs. dynamic):
▪ Static environments do not change while the agent is deliberating over what to do.
▪ Dynamic environments do change, so the agent should consult the world when choosing actions; alternatively, it can anticipate the change during deliberation or make decisions very fast.
▪ Semidynamic: the environment itself does not change with the passage of time, but the agent's performance score does.

Discrete (vs. continuous):
▪ A limited number of distinct, clearly defined percepts and actions, vs. a range of (continuous) values.

Single-agent (vs. multi-agent):
▪ An agent operating by itself in an environment, vs. many agents working together.

[Table: examples of task environments and their characteristics]

Real World

► The environment type largely determines the agent design.
► The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
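The task-environment comparison above can be encoded as a small lookup table. The classifications below follow the standard AIMA examples; they are assumptions, since the slide's own table did not survive in the transcript:

```python
# Each task maps to its classification along the six environment dimensions.
PROPERTIES = ("observable", "deterministic", "episodic",
              "static", "discrete", "agents")

# Assumed classifications (standard AIMA examples, not read from the slide).
TASK_ENVIRONMENTS = {
    "crossword puzzle":  ("fully", "deterministic", "sequential",
                          "static", "discrete", "single"),
    "chess with clock":  ("fully", "strategic", "sequential",
                          "semi", "discrete", "multi"),
    "taxi driving":      ("partially", "stochastic", "sequential",
                          "dynamic", "continuous", "multi"),
    "medical diagnosis": ("partially", "stochastic", "sequential",
                          "dynamic", "continuous", "single"),
}

def describe(task):
    """Pair each property name with the task's classification."""
    return dict(zip(PROPERTIES, TASK_ENVIRONMENTS[task]))
```

For example, `describe("taxi driving")["observable"]` gives `"partially"`, matching the point that the real world sits at the hard end of every dimension.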