
Uploaded by FreshEnlightenment1211



Environments

Task environments are the problem spaces to which agents are the solutions. The task environment directly affects the appropriate design of the agent.

How do we specify a task environment? With PEAS:
- Performance measure
- Environment
- Actuators
- Sensors

Agent: Autonomous taxi
- Performance measure: safe, fast, legal, comfortable trip
- Environment: roads, other traffic, pedestrians, customers
- Actuators: steering, accelerator, brake, signal, horn, display
- Sensors: cameras, sonar, speedometer, GPS, accelerometer, engine sensors, keyboard

Agent: Part-picking robot
- Performance measure: percentage of parts in correct bins
- Environment: conveyor belt with parts, bins
- Actuators: jointed arm and hand
- Sensors: camera, joint angle sensors

Properties of Environments

Fully observable vs. partially observable
- A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of actions. Relevance depends on the choice of performance measure.
- In a fully observable environment, the agent need not maintain an internal state to keep track of the world.
- Partially observable: noisy or inaccurate sensors, or parts of the environment state missing from the sensor data.
  - Ex: a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares.
  - Ex: an autonomous taxi cannot see what other drivers are thinking.
- Unobservable: the agent has no sensors at all.

Single agent vs. multi-agent
- Single agent. Ex: an agent solving a crossword puzzle.
- Multi-agent. Ex: an agent playing chess (two agents).
- Which entities must be viewed as agents, and which as objects in the environment? The key distinction: whether B's behaviour is best described as maximizing a performance measure whose value depends on A's behaviour.
- Competitive multi-agent environment. Ex: chess, where opponent B is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A's performance. (Randomized behaviour is sometimes rational because it avoids the pitfalls of predictability. How?)
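The PEAS specification above can be written down as a simple record. A minimal sketch in Python, using the taxi example from the table (the class and field names are illustrative, not part of the source material):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS specification of a task environment."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# The autonomous-taxi row of the PEAS table, as data.
taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "accelerometer",
             "engine sensors", "keyboard"],
)

print(taxi.performance_measure)  # ['safe', 'fast', 'legal', 'comfortable trip']
```

Writing the specification as structured data makes the point of PEAS concrete: designing the agent starts from an explicit list of what it is scored on, what it acts in, and what it can do and sense.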
- Cooperative multi-agent environment. Ex: taxi driving, where avoiding collisions maximizes the performance measure of all agents. (This environment is also partially competitive. Why?)

Deterministic vs. stochastic
- Deterministic: the next state of the environment is completely determined by the current state and the action executed by the agent; otherwise the environment is stochastic.
- If an environment is fully observable and deterministic, the agent need not worry about uncertainty. Ex: the vacuum world.
- An environment is uncertain if it is not fully observable or not deterministic.
- Non-deterministic: actions are characterised by their possible outcomes, but no probabilities are attached to those outcomes. Performance measures for such environments typically require the agent to succeed for all possible outcomes of its actions.
- Stochastic: uncertainty about outcomes is characterised by probabilities. Ex: autonomous taxi driving.

Episodic vs. sequential
- Episodic: the agent's experience is divided into episodes. In each episode, the agent receives a percept and then performs a single action. The next episode does not depend on the actions taken in previous episodes.
  - Ex: classification tasks. An agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions, and the current decision does not affect whether the next part is defective.
- Sequential: the current decision could affect all future decisions. Ex: chess and taxi driving, where short-term actions can have long-term consequences.
- Episodic environments are much simpler to design for than sequential environments. Why?

Static vs. dynamic
- If the environment state can change while the agent is deliberating over an action, the environment is dynamic; otherwise it is static. Ex: autonomous taxi driving is dynamic. Why?
- A crossword puzzle is static.
- Semi-dynamic: the environment itself does not change with the passage of time, but the agent's performance score does. Ex: chess played with a clock. Why?
- Static environments are easier to design for: the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.

Discrete vs. continuous
- Applies to the state of the environment, to the agent's percepts and actions, and to the way time is handled.
- Ex: chess has a finite number of distinct states and a discrete set of percepts and actions.
- Ex: autonomous taxi driving is a continuous-state, continuous-time problem, and taxi-driving actions are continuous.

Known vs. unknown
- In a known environment, the outcomes (or outcome probabilities) for all actions are given. A known environment can still be partially observable. Ex: solitaire card games.
- If the environment is unknown, the agent will have to learn how it works in order to make good decisions. An unknown environment can be fully observable.

The hardest case: partially observable, multi-agent, stochastic, sequential, dynamic, continuous, and unknown! Taxi driving is hard in all these senses, except that for the most part the driver's environment is known.

Environment specification summary:

Environment             | Observable | Agents | Deterministic/Stochastic | Episodic/Sequential | Static/Dynamic | Discrete/Continuous
Part-picking robot      | Partially  | Single | Stochastic               | Episodic            | Dynamic        | Continuous
Autonomous taxi driving | Partially  | Multi  | Stochastic               | Sequential          | Dynamic        | Continuous
Crossword puzzle        | Fully      | Single | Deterministic            | Sequential          | Static         | Discrete
Chess with a clock      | Fully      | Multi  | Deterministic            | Sequential          | Semi-dynamic   | Discrete
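The environment properties summarized above can be encoded as data and queried by an agent designer. A minimal sketch (the dictionary keys and value strings are an illustrative encoding, not from the source):

```python
# Properties of the four example environments, one record per environment.
ENVIRONMENTS = {
    "Part-picking robot":      dict(observable="partially", agents="single",
                                    outcome="stochastic", order="episodic",
                                    change="dynamic", states="continuous"),
    "Autonomous taxi driving": dict(observable="partially", agents="multi",
                                    outcome="stochastic", order="sequential",
                                    change="dynamic", states="continuous"),
    "Crossword puzzle":        dict(observable="fully", agents="single",
                                    outcome="deterministic", order="sequential",
                                    change="static", states="discrete"),
    "Chess with a clock":      dict(observable="fully", agents="multi",
                                    outcome="deterministic", order="sequential",
                                    change="semi-dynamic", states="discrete"),
}

# Example query: which environments force the agent to handle uncertainty,
# i.e. are partially observable or have stochastic outcomes?
uncertain = [name for name, props in ENVIRONMENTS.items()
             if props["observable"] == "partially"
             or props["outcome"] == "stochastic"]
print(uncertain)  # ['Part-picking robot', 'Autonomous taxi driving']
```

Queries like this mirror how the properties are used in practice: each combination of properties narrows down which agent designs are appropriate.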
