Summary

This document covers intelligent agents in artificial intelligence, focusing on agent architecture, agent functionality, the main types of agents, and the different types of environments they operate in.

Full Transcript

# AI: Lecture 3

## Agents

* **Agents = Architecture + Program**
* The portion of the **environment** detected by the agent's **sensors** is brought into the **program**.
* **Environment Types**
    * **Fully Observable:** The agent's sensors give it access to the complete state of the environment at each point in time. Ex: chessboard - the entire board (8x8 squares) is visible at once.
    * **Partially Observable:** The agent must move step by step, point by point, to detect what is in the environment. Ex: a self-driving car cannot detect a traffic light or the location of the car ahead until it approaches them.
    * **Single Agent:** The agent is the only intelligent entity in the environment, for example a single self-driving car.
    * **Multi-Agent:** Multiple agents interact in the same environment; this requires communication between agents to avoid conflicts.
    * **Deterministic:** The next state of the environment is completely determined by the agent's actions, for example a straight road with no chance of obstacles.
    * **Stochastic:** Outcomes are partly beyond the agent's control, so the possible outcomes are described by probabilities.
    * **Episodic:** The environment is a series of independent episodes; the decision made in one episode does not affect the outcome of subsequent episodes.
    * **Sequential:** The environment is one continuous sequence, and an action taken at one point affects future states, and therefore subsequent decisions.
    * **Dynamic:** The environment is constantly changing, and these changes may alter the agent's perception of the environment while it is acting.
    * **Static:** The environment is unchanging, so the agent can safely assume the same action will yield the same result.
    * **Discrete:** The environment has a finite number of states.
    * **Continuous:** The environment has an infinite number of possible states.
    * **Known:** The agent knows the environment's rules.
    * **Unknown:** The agent does not know the environment's rules, or the rules constantly evolve based on events that occur.

## Architecture

* **Hardware:** The physical embodiment of the agent, which depends on the environment. The hardware must be matched to the environment to ensure the agent's success.
* **Agent function:** A function that maps inputs (percepts coming from the environment) to outputs (actions returned to the environment).
* **Perceptions:** The agent must sense its current environment to gather information for decision-making. A percept may be one of a sequence of percepts.
* **Memory:** Decision-making requires an understanding of the current conditions in the environment. This is achieved via a memory that maintains a record of the current state.
* **Percept update:** The memory is updated with newly acquired percepts at each step.
* **Return action:** The decision is based on the perceived environment and the current memory, and the chosen action is returned to the environment.

## Example Scenario

* **Look-up action:**
    * **Sensing data:** the table is updated with the sensed data.
    * **Action:** the action is looked up in the table and returned.
    * **Table:** the table is designed in advance, based on the available time, the table updates, and the latest decision.

## Types of Agents

* **Simple Reflex Agent:** The agent has a look-up table that maps sensed states to actions.
* **Model-Based Reflex Agent:** The agent has a model of the environment (a representation of how the environment works, including the possible outcomes of various actions).
* **Goal-Based Agent:** The agent acts to achieve a particular goal.
* **Utility-Based Agents:** The agent acts to maximise its utility, which combines the achievement of multiple goals.

## Learning Agents

* **Learning agents** can improve their performance based on experience.
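The architecture described above (sense → update memory → look up → return action) can be sketched as a minimal table-driven agent program. This is an illustrative sketch, not the lecture's code; the class name, the two-cell vacuum world, and the sample table are assumptions chosen for the example.

```python
class TableDrivenAgent:
    """Minimal table-driven agent: maps the percept sequence seen so far to an action."""

    def __init__(self, table):
        self.table = table    # maps (percept, percept, ...) -> action
        self.percepts = []    # memory: record of all percepts so far

    def __call__(self, percept):
        self.percepts.append(percept)            # percept update step
        return self.table[tuple(self.percepts)]  # look up and return action


# Illustrative table for a two-cell vacuum world (a classic textbook example):
# a percept is (location, status).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    (("A", "Clean"), ("B", "Clean")): "NoOp",
}

agent = TableDrivenAgent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

Note that the table grows with every possible percept *sequence*, which is why real agents replace the explicit table with a program that computes the action.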
## Simple Reflex Agent

* **What is the world like now?** The agent senses its current environment, then acts in direct response.
* **Sensing data:** The agent senses the current state via its available sensors.

## Model-Based Reflex Agent

* The agent does not use only the current percept to make its decision; it also uses what it has learned about how the environment works to calculate the best action based on the predicted outcome.

## Utility-Based Agents

* The agent tries to satisfy multiple goals, and must decide how to best trade them off, choosing the action that maximises its overall utility.

## Learning Agents

* The goal of a learning agent is to maximise its performance over time. This is achieved through a trial-and-error process, learning from feedback on its actions.
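The contrast between the agent types above can be illustrated with two small sketches: a simple reflex agent that reacts only to the current percept, and a utility-based agent that scores candidate actions against several weighted goals. The function names, the driving actions, and the safety/speed scores are illustrative assumptions, not from the lecture.

```python
# Simple reflex agent: condition-action rules over the current percept only
# (no memory, no model). Percept is (location, status) in a two-cell world.
def simple_reflex_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"


# Utility-based agent: combines several goals into a single utility score
# and picks the action that maximises it.
def utility_based_agent(actions, utility_fns, weights):
    def utility(action):
        return sum(w * u(action) for u, w in zip(utility_fns, weights))
    return max(actions, key=utility)


# Illustrative goals for a self-driving car: safety vs. speed (scores assumed).
safety = {"brake": 1.0, "cruise": 0.6, "accelerate": 0.2}
speed = {"brake": 0.1, "cruise": 0.5, "accelerate": 1.0}

action = utility_based_agent(
    ["brake", "cruise", "accelerate"],
    [safety.get, speed.get],
    weights=[0.7, 0.3],  # safety weighted more heavily than speed
)
print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(action)  # brake
```

With safety weighted at 0.7, braking scores highest (0.73); shifting the weights toward speed would flip the choice, which is exactly the trade-off a utility-based agent is meant to resolve.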
