AGENT IN AI

An AI system can be defined as the study of a rational agent and its environment. An agent is anything that perceives its environment through sensors and acts upon that environment through actuators. An AI agent can have mental properties such as knowledge, belief, and intention.

An agent can be:
- Human agent: eyes, ears, and other organs serve as sensors; hands, legs, and the vocal tract serve as actuators.
- Robotic agent: cameras, infrared range finders, and NLP serve as sensors; various motors serve as actuators.
- Software agent: keystrokes and file contents serve as sensory input; it acts on those inputs and displays output on the screen.

Devices of agents:
- Sensor: a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
- Actuator: a machine component that converts energy into motion; actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
- Effector: a device that affects the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.

Real-time example – Automatic Door
- Sensor: a device that detects and measures physical properties and sends the information to other devices. Example: a motion sensor detects a person approaching the door.
- Actuator: a device that converts energy into motion or controls a system. Example: an electric motor opens and closes the door based on signals from the motion sensor.
- Effector: a part of a system that acts in response to a signal from a control system, often part of the actuator. Example: the actual movement of the door opening or closing.
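The sense-act cycle described above can be sketched as a small program; the boolean percept and the action names are simplified illustrations of the automatic door example, not a real device API.

```python
# Minimal sense-act loop for the automatic door example.
# Sensor readings and actuator commands are simplified to a boolean
# and strings; all names here are illustrative, not a real device API.

def agent_program(person_detected: bool) -> str:
    """Map a percept (person detected?) to an actuator command."""
    return "open_door" if person_detected else "close_door"

def run(percepts):
    """Feed a sequence of sensor readings through the agent."""
    return [agent_program(p) for p in percepts]

actions = run([False, True, True, False])
print(actions)  # ['close_door', 'open_door', 'open_door', 'close_door']
```

The agent program is a pure mapping from percepts to actions; the sensor and actuator hardware sit outside the loop.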
The four main rules for an AI agent:
- Rule 1: An AI agent must be able to perceive its environment.
- Rule 2: The observations must be used to make decisions.
- Rule 3: Decisions should result in an action.
- Rule 4: The action taken by an AI agent must be a rational (logical) action.

Types of Agents in Artificial Intelligence

1. Simple Reflex Agents: These agents act on the here and now and ignore the past. They respond using event-condition-action rules.

Real-time example – Automatic Light Switch
- Sensor: detects an immediate condition in the environment. Example: a motion sensor detects movement in a room.
- Actuator: converts signals into actions. Example: a relay switch turns the light on or off.
- Reflex action: an immediate response based on a specific condition. Example: the lights turn on when motion is detected and off when no motion is detected for a set period.

Simple reflex agents are the simplest agents. They act on the present percept and overlook the percept history, following the condition-action rule, which maps the current state to an action. They assume the environment is fully observable.

Problems with simple reflex agents:
- Very limited intelligence.
- No knowledge of non-perceptual parts of the state.
- The rule set is usually too big to generate and store.
- If the environment changes, the collection of rules must be updated.

2. Model-based Agents: These agents choose their actions as reflex agents do, but they have a more comprehensive view of the environment. Model-based agents maintain an internal state to keep track of the world and use this state to make more informed decisions. They rely on a model of the environment to predict the outcomes of their actions and update their knowledge accordingly.

Real-time example – Smart Home HVAC System
- Sensor: detects conditions in the environment. Example: temperature and humidity sensors collect data about the indoor climate.
- Internal model: maintains an internal representation of the world. Example: an environmental model tracks changes in temperature and humidity over time.
- Actuator: converts signals into actions. Example: a heater, air conditioner, or humidifier adjusts the indoor climate based on the agent's decisions.
- Decision making: uses the internal model to predict outcomes and plan actions. Example: climate-control algorithms predict future temperature and humidity levels to maintain comfort and efficiency.

A model-based agent works by finding a rule whose condition matches the current situation. It works in a partially observable environment and keeps track of the state. It has two important parts:
- Model: knowledge of "how things happen in the world" — this is what makes it a model-based agent.
- Internal state: a representation of the current state based on the percept history.

Updating the state requires information about how the world evolves independently of the agent, and how the agent's actions affect the world.

3. Goal-based Agents: Goal-based agents work towards achieving specific goals by planning and executing actions that lead to the desired outcomes. These agents evaluate the current state, compare it to the goal state, and choose actions that bring them closer to their goals.

Real-time example – Self-driving Cars
- Sensor: detects conditions in the environment. Example: LIDAR, cameras, and GPS collect data about the surroundings, traffic conditions, and location.
- Internal model: maintains an internal representation of the world. Example: a mapping and localization system uses sensor data to create a map and track the car's position.
- Goal: the desired state or outcome to achieve. Example: the destination, the end location the car aims to reach.
- Decision making: plans actions to achieve the goal. Example: path-planning algorithms calculate the optimal route to the destination, avoiding obstacles and following traffic rules.

A goal is a description of a desirable situation. Goal-based agents select their actions to achieve goals, which makes them more flexible.
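The compare-current-state-to-goal loop of a goal-based agent can be sketched as follows; the one-dimensional "route" and the action names are simplified assumptions, not a real path planner.

```python
# Toy goal-based agent: the world state is a position on a 1-D route
# and the goal is a destination position. The agent compares the
# current state to the goal state and picks the action that brings it
# closer. A real self-driving planner would search over a road map.

def choose_action(position: int, goal: int) -> str:
    """Pick the action that reduces the distance to the goal."""
    if position < goal:
        return "forward"
    if position > goal:
        return "backward"
    return "stop"

def drive(position: int, goal: int):
    """Act until the goal state is reached; return the action trace."""
    trace = []
    while position != goal:
        action = choose_action(position, goal)
        trace.append(action)
        position += 1 if action == "forward" else -1
    trace.append("stop")
    return trace

print(drive(0, 3))  # ['forward', 'forward', 'forward', 'stop']
```

The key difference from a reflex agent is the explicit goal test: the same percept leads to different actions depending on where the goal is.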
Knowledge of the current state of the environment is not always enough for the agent to decide what to do; the agent uses searching and planning, which make it proactive.

4. Utility-based Agents: These are comparable to goal-based agents, except they add a utility measurement. This measurement rates each possible scenario based on the desired result, and the agent selects the action that maximizes the outcome.

Real-time example – Smart Home HVAC System
- Sensor: detects conditions in the environment. Example: temperature and humidity sensors collect data about the indoor climate.
- Internal model: maintains an internal representation of the world. Example: a climate model tracks changes in temperature and humidity over time.
- Utility function: measures the desirability of different states. Example: comfort and energy efficiency, balancing comfort level against energy consumption.
- Decision making: plans actions to maximize utility. Example: climate-control algorithms adjust heating, cooling, and humidity to maximize comfort and minimize energy use.
- Actuator: converts signals into actions. Example: a heater, air conditioner, or humidifier adjusts the indoor climate based on the agent's decisions.

Utility-based agents choose actions based on a preference (utility) for each state. Goals alone are inadequate when goals conflict and only some can be achieved, or when goals have some uncertainty of being achieved and the likelihood of success must be weighed against the importance of the goal.

5. Learning Agents: These agents employ an additional learning element to gradually improve and become more knowledgeable about the environment over time. The learning element uses feedback to decide how the performance element should be gradually changed to show improvement.

Real-time example – Siri and Alexa
- Sensor: detects conditions in the environment. Example: a microphone and user input collect voice commands and text input from the user.
- Internal model: maintains an internal representation of the world. Example: user preferences and context, tracked for
personalized responses.
- Learning element: improves the agent's knowledge and performance. Example: machine learning algorithms analyze user interactions to improve speech recognition and response accuracy.
- Performance element: executes actions based on the current state. Example: natural language processing (NLP) understands and processes user commands to perform tasks.
- Critic: provides feedback on the agent's actions. Example: user feedback evaluates the assistant's performance based on user satisfaction and corrective inputs.
- Problem generator: suggests new actions to explore. Example: a recommendation system proposes new features or questions to better understand user preferences.

A learning agent is an agent that can learn from its past experiences. It starts by acting with basic knowledge and then adapts automatically through learning. A learning agent has four main conceptual components:
1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

THE STRUCTURE OF INTELLIGENT AGENTS

Agent = Architecture + Agent Program
- Architecture: the machinery that the agent program executes on.
- Agent function: maps a percept to an action.
- Agent program: an implementation of the agent function.

The job of AI is to design the agent program that implements the agent function mapping percepts to actions.

Characteristics of an Intelligent Agent

Rationality: Perfect rationality assumes that the rational agent knows everything and will take the action that maximizes its utility.
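Utility-maximizing action selection, as described for the utility-based agent above, can be sketched as follows; the utility function trading comfort against energy use, and the numbers in it, are invented illustrations for the HVAC example.

```python
# Toy utility-based choice for the HVAC example: score each candidate
# action by a utility that trades comfort against energy use, then pick
# the action with the highest score. All values are illustrative.

def utility(predicted_temp: float, energy_cost: float,
            target_temp: float = 21.0, energy_weight: float = 0.5) -> float:
    comfort = -abs(predicted_temp - target_temp)   # closer to target = better
    return comfort - energy_weight * energy_cost   # penalize energy use

def choose(current_temp: float) -> str:
    # action -> (predicted resulting temperature, energy cost)
    outcomes = {
        "heat": (current_temp + 2.0, 1.0),
        "cool": (current_temp - 2.0, 1.0),
        "off":  (current_temp, 0.0),
    }
    return max(outcomes, key=lambda a: utility(*outcomes[a]))

print(choose(18.0))  # "heat": moving toward 21 °C outweighs the energy cost
```

Unlike a plain goal test, the utility function lets the agent trade off conflicting objectives (comfort vs. energy) on a single scale.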
A rational action is the action that maximizes the expected value of the performance measure given the percept sequence to date.

Bounded rationality: the property of an agent that behaves in a manner nearly optimal with respect to its goals, as far as its resources allow. An intelligent agent is expected to act as close to optimally as its abilities and resource constraints permit within its environment.

Agent Environments

Environments in which agents operate can be characterized in different ways.

Observability: An environment can be fully observable or partially observable. In a fully observable environment, everything relevant to the action being considered is observable, so the agent does not need to keep track of changes in the environment. Example: a chess-playing system. In a partially observable environment, the relevant features of the environment are only partially observable. Example: a bridge-playing program.

Determinism: In a deterministic environment, the next state of the environment is completely determined by the current state and the agent's action. Example: an image-analysis system, where the processed image is determined completely by the current image and the processing operations.

Episodicity: In an episodic environment, subsequent episodes do not depend on the actions taken in previous episodes. In a sequential environment, the agent engages in a series of connected episodes.

Dynamism: A static environment does not change from one state to the next while the agent is considering its course of action; the only changes are those caused by the agent itself. Because the environment does not change while the agent is thinking, the passage of time during deliberation is irrelevant, and the agent does not need to observe the world while deliberating.
A dynamic environment changes over time independently of the agent's actions, so if the agent does not respond in a timely manner, that counts as a choice to do nothing.

Continuity: If the number of distinct percepts and actions is limited, the environment is discrete; otherwise it is continuous.

PEAS – Performance, Environment, Actuators, Sensors

Specifying the task environment is always the first step in designing an agent.
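As an illustration, a PEAS description for the self-driving car discussed earlier can be written down as a simple data structure; the specific entries are textbook-style examples I've chosen, not an exhaustive specification.

```python
# PEAS task-environment description for a self-driving car, written as
# a plain data structure. The entries are illustrative examples only.

peas_self_driving_car = {
    "Performance": ["safety", "legal driving", "passenger comfort", "speed"],
    "Environment": ["roads", "traffic", "pedestrians", "weather"],
    "Actuators":   ["steering", "accelerator", "brake", "horn", "display"],
    "Sensors":     ["cameras", "LIDAR", "GPS", "speedometer", "odometer"],
}

for component, examples in peas_self_driving_car.items():
    print(f"{component}: {', '.join(examples)}")
```

Writing the PEAS table down first makes the later design choices (which agent type, which internal model) easier to justify.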