
2-Intelligance Agent.pdf




Artificial Intelligence (CS340)
Intelligent Agents (Chapter 2), Part One

Outline
- Agents and environments
- Rationality
- PEAS (Performance measure, Environment, Actuators, Sensors)
- Environment types
- Agent types

Agents
- An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
- Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
- Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.

Agents and environments
- The agent function maps from percept histories to actions: f: P* → A.
- The agent program runs on the physical architecture to produce f.
- Agent = architecture + program.

Vacuum-cleaner world
- Percepts: location and contents, e.g., [A, Dirty].
- Actions: Left, Right, Suck, NoOp.

A vacuum-cleaner agent
- What is the right function?
- Can it be implemented as a small agent program? (A small sketch follows at the end of this part.)

Rational agents
- An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
- Performance measure: an objective criterion for success of an agent's behavior. For example, the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, and so on.
- Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
- Rationality is distinct from omniscience (all-knowing with infinite knowledge). An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality.
- Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).
- An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt). A rational agent should be autonomous: a vacuum-cleaning agent that learns to foresee where and when additional dirt will appear will do better than one that does not.
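The vacuum-cleaner agent question above can indeed be answered with a very small program. The following is a minimal sketch, not taken from the slides: the function name and the encoding of a percept as a (location, status) pair are assumptions made for illustration.

    # Minimal sketch of a small agent program for the two-square vacuum world.
    # Percepts are assumed to be (location, status) pairs such as ("A", "Dirty");
    # the actions are the slides' Left, Right, Suck (NoOp is never chosen here).

    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"      # clean the current square first
        elif location == "A":
            return "Right"     # otherwise move toward the other square
        else:
            return "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
    print(reflex_vacuum_agent(("B", "Clean")))   # Left

Whether this small program is also rational depends on the chosen performance measure, for example whether the electricity consumed by moving back and forth is counted, which is exactly the point of the rationality discussion above.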
Specifying the task environment: PEAS
- PEAS: Performance measure, Environment, Actuators, Sensors.
- A task environment is specified by its performance measure, environment, actuators, and sensors.

PEAS example: medical diagnosis system
- Performance measure: healthy patient, minimize costs, minimize lawsuits.
- Environment: patient, hospital, staff.
- Actuators: screen display (questions, tests, diagnoses, treatments, referrals).
- Sensors: keyboard (entry of symptoms, findings, patient's answers).

PEAS example: part-picking robot
- Performance measure: percentage of parts in correct bins.
- Environment: conveyor belt with parts, bins.
- Actuators: jointed arm and hand.
- Sensors: camera, joint angle sensors.

PEAS example: interactive English tutor
- Performance measure: maximize student's score on test.
- Environment: set of students.
- Actuators: screen display (exercises, suggestions, corrections).
- Sensors: keyboard.

Environment types

Fully observable vs. partially observable
- Fully observable: the agent's sensors give it access to the complete state of the environment at each point in time.

Single agent vs. multiagent
- Single agent: an agent operating by itself in an environment. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
- Multiagent environments can be competitive (chess) or cooperative (taxi driving).

Deterministic vs. nondeterministic
- Deterministic: actions have predictable effects. If the next state of the environment is completely determined by the current state and the action executed by the agent, the environment is deterministic; otherwise it is stochastic.
- Examples: taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly. The vacuum world as we described it is deterministic, but variations can include stochastic elements such as randomly appearing dirt and an unreliable suction mechanism.

Static vs. dynamic
- If the environment can change while an agent is deliberating, the environment is dynamic for that agent; otherwise it is static.
- Semidynamic: the environment itself does not change with the passage of time, but the agent's performance score does.
- Examples: taxi driving is dynamic; chess played with a clock is semidynamic; crossword puzzles are static.

Discrete vs. continuous
- Discrete: a limited number of distinct, clearly defined percepts and actions.
- Examples: chess has a discrete set of percepts and actions. Taxi driving is a continuous-state and continuous-time problem, and taxi-driving actions and percepts are also continuous.
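A small sketch of how these properties can be recorded per task environment; the dictionary layout and names below are illustrative assumptions, and the Yes/No values simply restate the taxi-driving and crossword-puzzle entries from the summary table below.

    # Sketch: recording the environment properties defined above per task
    # environment. The layout is an assumption made for illustration; the
    # values for taxi driving and crossword puzzles follow the summary table.

    environments = {
        "taxi driving": {
            "fully observable": False,
            "deterministic": False,
            "static": False,
            "discrete": False,
            "single agent": False,
        },
        "crossword puzzle": {
            "fully observable": True,
            "deterministic": True,
            "static": True,
            "discrete": True,
            "single agent": True,
        },
    }

    for name, properties in environments.items():
        missing = [prop for prop, holds in properties.items() if not holds]
        print(name, "lacks:", ", ".join(missing) or "nothing")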
Episodic vs. sequential
- Does the next "episode" depend on the actions taken in previous episodes?
- Episodic: the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
- Sequential: the current decision could affect all future decisions.
- Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.
- Examples: many classification tasks are episodic. An agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision does not affect whether the next part is defective. Chess and taxi driving are sequential.

Environment types: summary
- The environment type largely determines the agent design.
- The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multiagent.

                      Chess with  Chess without  Taxi     Crossword  Medical
                      a clock     a clock        driving  puzzle     diagnosis
    Fully observable  Yes         Yes            No       Yes        No
    Deterministic     Strategic   Strategic      No       Yes        No
    Episodic          No          No             No       No         No
    Static            Semi        Yes            No       Yes        No
    Discrete          Yes         Yes            No       Yes        No
    Single agent      No          No             No       Yes        Yes

Agent functions and programs
- An agent is completely specified by the agent function mapping percept sequences to actions.
- The agent program implements the agent function.
- Agent = architecture + program, where the architecture is some sort of computing device with physical sensors and actuators.
- The aim of AI is to design the agent program: find a way to implement the rational agent function concisely.

Table-lookup agent

    function TABLE-DRIVEN-AGENT(percept) returns an action
        static: percepts, a sequence, initially empty
                table, a table of actions, indexed by percept sequences, initially fully specified
        append percept to the end of percepts
        action ← LOOKUP(percepts, table)
        return action
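The pseudocode above can be turned into a tiny runnable program. This is a minimal sketch, not the slides' code: the closure-based structure and the example table for the vacuum world are assumptions made for illustration, and a real table would need one entry for every possible percept sequence.

    # Minimal Python rendering of the TABLE-DRIVEN-AGENT pseudocode above.
    # The example table is a tiny vacuum-world fragment chosen for illustration;
    # in general the table must cover every possible percept sequence.

    def make_table_driven_agent(table):
        percepts = []                           # the percept sequence, initially empty
        def agent(percept):
            percepts.append(percept)            # append percept to the end of percepts
            return table.get(tuple(percepts))   # action <- LOOKUP(percepts, table)
        return agent

    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }

    agent = make_table_driven_agent(table)
    print(agent(("A", "Clean")))   # Right
    print(agent(("B", "Dirty")))   # Suck (the whole history is looked up)

Even for the tiny vacuum world, listing every percept sequence quickly becomes impractical, which is why the aim stated above is to implement the rational agent function concisely rather than by table lookup.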
