AI Agent Types & Concepts PDF

Document Details

Uploaded by InvulnerableGrossular

VIT Bhopal University

Tags

artificial intelligence, AI agents, AI concepts, knowledge representation

Summary

This document covers different types of AI agents, including table-driven, reflex, model-based, goal-based, utility-based, and learning agents. It explains the essential concepts associated with each agent type and summarizes how they function within specific contexts. The document emphasizes the roles of perception, decision-making, responding to environment/percepts, goals, and learning.

Full Transcript


Recap
1. Difference between AI, ML and DL (case study)
2. 15 applications of AI
3. Watch the video (link to be sent): https://www.youtube.com/watch?v=poLZqn2_dv4
4. Deep Blue (chess case study)
5. Limitations in AI today

Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
agent = architecture + program

Recap
PEAS? (Dance-recognition softbot system.) Rational agent? Four points? Percept? Sensors and actuators. Examples of rational agents. AI vs ML vs DL.

Agent Types (in order of increasing sophistication)
1. Table-driven agent
2. Simple reflex agent
3. Reflex agent with internal state
4. Agent with explicit goals
5. Utility-based agent

Simple Reflex Agents: Reacting Swiftly to the Present
Model-Based Agents: Planning for the Future
Goal-Based Agents: Working Towards Objectives
Utility-Based Agents: Balancing Preferences and Trade-offs
Learning Agents: Adapting and Improving Over Time

Agent types
(1) Table-driven agents use a percept-sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.
(2) Simple reflex agents are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which have no memory of past world states.
(3) Agents with memory (model-based reflex agents) have internal state, which is used to keep track of past states of the world.
(4) Agents with goals (goal-based agents) have, in addition to state information, goal information that describes desirable situations. Agents of this kind take future events into consideration.
(5) Utility-based agents base their decisions on classic axiomatic utility theory in order to act rationally.
(6) Learning agents have the ability to improve their performance through learning.

Recap: environment types.

Examples
Model-based agents, such as autonomous vehicles, are essential in domains where foresight is crucial. They can anticipate the behavior of other entities and plan optimal trajectories, making them invaluable for safe and efficient navigation.
Goal-based agents: in industries like logistics, goal-based agents optimize routes and distribution, minimizing costs and maximizing efficiency.
Utility-based agents: in fields like economics and resource management, utility-based agents handle complex decisions, optimizing resource allocation based on cost, time, and quality.
Learning agents: notably, the capacity to adapt in real time, incorporating fresh data and adjusting strategies, is a defining hallmark of these agents.

I) Table-lookup driven agents
Use a percept-sequence/action table in memory to find the next action. Implemented as a (large) lookup table.
Drawbacks:
- Huge table (often simply too large)
- Takes a long time to build/learn the table

Table-driven agent
function TABLE-DRIVEN-AGENT(percept) returns action
  static: percepts, a sequence, initially empty
          table, a table indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
An agent based on a prespecified lookup table.
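The TABLE-DRIVEN-AGENT pseudocode above can be sketched in Python. This is a minimal illustration; the vacuum-world percepts and table entries are hypothetical, invented only to make the example runnable:

```python
class TableDrivenAgent:
    """Looks up the next action in a prespecified table indexed
    by the entire percept sequence seen so far."""

    def __init__(self, table):
        self.table = table      # maps percept-sequence tuples to actions
        self.percepts = []      # percept history, initially empty

    def act(self, percept):
        self.percepts.append(percept)            # append percept to history
        return self.table[tuple(self.percepts)]  # action <- LOOKUP(percepts, table)

# Hypothetical vacuum-world table: every possible percept sequence must be
# enumerated in advance -- this is exactly the "huge table" drawback.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}

agent = TableDrivenAgent(table)
print(agent.act(("A", "clean")))   # -> right
print(agent.act(("B", "dirty")))   # -> suck
```

Note how the table is keyed by the whole history, not the current percept: even this toy world needs a separate row for every sequence, which is why the approach does not scale.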
It keeps track of the percept sequence and simply looks up the best action.
Problems:
- Huge number of possible percepts (consider an automated taxi with a camera as the sensor), so the lookup table would be enormous
- Takes a long time to build the table
- Not adaptive to changes in the environment; the entire table must be updated if changes occur

II) Simple reflex agents
These agents have no memory of past world states or percepts, so actions depend solely on the current percept. The action becomes a "reflex." They use condition-action rules.

III) Model-based reflex agents
Key differences with respect to simple reflex agents:
- Agents have internal state, which is used to keep track of past states of the world.
- Agents have the ability to represent change in the world.
Example: Rodney Brooks' subsumption architecture (behavior-based robots).
Module: Logical Agents, Representation and Reasoning (Parts III/IV of R&N).
How detailed should the model be? For instance, the agent infers "potentially dangerous driver in front"; if "dangerous driver in front," then "keep distance."

An example: Brooks' Subsumption Architecture
Main idea: build complex, intelligent robots by decomposing behaviors into a hierarchy of skills, each defining a percept-action cycle for one very specific task. Examples: collision avoidance, wandering, exploring, recognizing doorways, etc. Each behavior is modeled by a finite-state machine with a few states (though each state may correspond to a complex function or module; this provides internal state to the agent). Behaviors are loosely coupled via asynchronous interactions. Note: minimal internal state representation.

IV) Goal-based agents
Key difference with respect to model-based agents: in addition to state information, they have goal information that describes desirable situations to be achieved. Agents of this kind take future events into consideration: what sequence of actions can I take to achieve certain goals? They choose actions so as to (eventually) achieve a (given or computed) goal.
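The two reflex-agent variants described above (sections II and III) can be contrasted in code. A minimal vacuum-world sketch, with hypothetical percepts and condition-action rules:

```python
def simple_reflex_agent(percept):
    """Stateless: the action depends only on the current percept,
    selected by condition-action rules."""
    location, status = percept
    if status == "dirty":                  # rule: dirty -> suck
        return "suck"
    return "right" if location == "A" else "left"

class ModelBasedReflexAgent:
    """Keeps internal state, so it can track parts of the world
    it is not currently perceiving."""

    def __init__(self):
        self.cleaned = set()               # internal model: squares known clean

    def act(self, percept):
        location, status = percept
        if status == "dirty":
            return "suck"
        self.cleaned.add(location)         # update internal state
        if {"A", "B"} <= self.cleaned:     # model: the whole world is clean
            return "no-op"
        return "right" if location == "A" else "left"
```

The design point: the simple reflex agent can never decide to stop, because stopping requires remembering which squares have already been seen clean; the internal state gives the model-based version exactly that ability.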
Module: Problem Solving
Goal-based agents consider the "future" (e.g., the goal "clean kitchen"). The agent keeps track of the world state as well as the set of goals it is trying to achieve, and chooses actions that will (eventually) lead to the goal(s). They are more flexible than reflex agents and may involve search and planning.

V) Utility-based agents
When there are multiple possible alternatives, how do we decide which one is best? Goals are qualitative: a goal specifies only a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes the "degree of happiness." A utility function U: State → R gives a measure of success or happiness at a given state. This is important for making tradeoffs: it allows decisions comparing conflicting goals, and comparing the likelihood of success against the importance of a goal (when achievement is uncertain). Utility-based agents use decision-theoretic models, e.g., faster vs. safer.
Goals alone are not enough to generate high-quality behavior (e.g., are the meals in the canteen good or not?). Many action sequences achieve the goals; some are better and some worse. If a goal means success, then utility means the degree of success (how successful the agent is). Decision-theoretic action selection (e.g., faster vs. safer) becomes more complicated when the agent needs to learn the utility information.

VI) Learning agents
Learning agents adapt and improve over time, e.g., via reinforcement learning (based on action payoff). Module: Learning. Example: the agent tries out the brakes on different road surfaces, takes percepts about road conditions, and learns that "a quick turn is not safe," so it selects no quick turn.

Characteristics of an AI agent
While AI tools and agents are software programs designed to automate tasks, specific key characteristics differentiate AI agents as more sophisticated AI software.
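Returning to the utility-based selection described above: it can be sketched as "predict the outcome of each action, then pick the action whose outcome maximizes U." A minimal illustration, in which the states, the weights inside U, and the predicted outcomes are all hypothetical:

```python
# Hypothetical driving example: each action leads to a predicted state
# (speed, risk), and U maps a state to a real number, U: State -> R.
def utility(state):
    """Degree of happiness, trading speed off against safety."""
    speed, risk = state
    return speed - 10 * risk       # hypothetical weights for the tradeoff

def utility_based_agent(actions, predict):
    """Choose the action whose predicted outcome maximizes utility."""
    return max(actions, key=lambda a: utility(predict(a)))

# Predicted outcomes: "faster" gains speed but adds risk ("faster vs. safer").
outcomes = {"faster": (120, 4.0), "safer": (90, 0.5)}
best = utility_based_agent(["faster", "safer"], outcomes.get)
print(best)   # "safer": 90 - 5 = 85 beats "faster": 120 - 40 = 80
```

Both actions reach the goal of driving on; the utility function is what lets the agent rank them, which a purely goal-based agent cannot do.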
You can consider an AI tool an AI agent when it has the following characteristics:
- Autonomy: an AI virtual agent is capable of performing tasks independently, without requiring constant human intervention or input.
- Perception: the agent senses and interprets the environment it operates in through various sensors, such as cameras or microphones.
- Reactivity: an AI agent can assess the environment and respond accordingly to achieve its goals.
- Reasoning and decision-making: AI agents are intelligent tools that can analyze data and make decisions to achieve goals. They use reasoning techniques and algorithms to process information and take appropriate actions.
- Learning: they can learn and enhance their performance through machine-learning, deep-learning, and reinforcement-learning techniques.
- Communication: AI agents can communicate with other agents or humans using different methods, such as understanding and responding to natural language, recognizing speech, and exchanging messages through text.
- Goal-oriented: they are designed to achieve specific goals, which can be predefined or learned through interaction with the environment.

Summary
An agent perceives and acts in an environment, has an architecture, and is implemented by an agent program. A rational agent always chooses the action that maximizes its expected performance, given its percept sequence so far. An autonomous agent uses its own experience rather than the designer's built-in knowledge of the environment. An agent program maps from percepts to actions and updates its internal state.
- Reflex agents (simple / model-based) respond immediately to percepts.
- Goal-based agents act in order to achieve their goal(s), possibly via a sequence of steps.
- Utility-based agents maximize their own utility function.
- Learning agents improve their performance through learning.
Representing knowledge is important for successful agent design.
The most challenging environments are partially observable, stochastic, sequential, dynamic, and continuous, and contain multiple intelligent agents.

Problem Solving Approach to Typical AI Problems
There are six major components of an artificial intelligence system. Together they are responsible for generating the desired results for a particular problem. These components are as follows:
1. Knowledge Representation: the major foundation of an artificial intelligence system. It is used to represent the necessary knowledge and build the knowledge base with which the AI system can perform tasks and generate results.
2. Heuristic Searching Techniques: as problems are tackled, the knowledge base keeps growing, making it difficult to search. To tackle this challenge, heuristic searching techniques can be used, which provide results efficiently (guided by certain criteria) in terms of time and memory usage.
3. Artificial Intelligence Hardware: hardware compatibility is a major concern when deploying software on machines. The hardware must be capable of accommodating the system and producing the desired results. Hardware components include all required machinery, spanning memory, processors, and communication devices. AI systems are incomplete without AI hardware.
4. Computer Vision and Pattern Recognition: with this component, AI programs capture inputs on their own by sensing a real-world scenario. Sufficient and compatible hardware enables better pattern gathering, which builds a more useful knowledge base.
5. Natural Language Processing: this component processes or analyzes written or spoken language. Speech recognition alone is not sufficient to capture real-world data; merely acquiring a word sequence and parsing sentences into the computer is not enough for an AI system to gain knowledge about its environment.
Natural language processing plays a vital role in helping AI systems understand the domain of a text.
6. Artificial Intelligence Languages and Support Tools: artificial intelligence languages are broadly similar to traditional software development languages, with additional features to capture human thought processes and logic as far as possible.
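Heuristic search (component 2 above) can be illustrated with a small greedy best-first search, which always expands the node with the smallest heuristic estimate. This is a minimal sketch; the graph and the heuristic values are hypothetical:

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Expand the node with the smallest heuristic estimate h(n) first,
    so the heuristic guides the order in which the space is explored."""
    frontier = [(h(start), start)]         # priority queue ordered by h
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:                   # reconstruct the path to the goal
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:       # also serves as a visited set
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None                            # goal unreachable

# Hypothetical graph and distance-style heuristic (h = 0 at the goal).
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
h_values = {"S": 3, "A": 1, "B": 2, "G": 0}
print(greedy_best_first("S", "G", graph.__getitem__, h_values.__getitem__))
# -> ['S', 'A', 'G']
```

The heuristic is what keeps the search from exploring "B" at all here: rather than scanning the whole knowledge base, the agent examines only the states the estimate ranks as promising.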
